- Reproducibility between robot and human movements: preliminary development of a robotic device reconstructing therapeutic motion
- Publish Date : 2020/12/30 Vol.20
- Report Outline :
- Purpose: Robot-mediated therapy is a promising approach for restoring upper limb motor function after stroke, but it has not demonstrated the expected effects because of its inability to reproduce the flexibility and complexity associated with therapists' assistance skills. The purpose of this study was to develop a preliminary dicephalus (DiC) system and provide preliminary data on the reproducibility between the motions of a robot and a therapist.
Subjects and Methods: The assessment for each human and robotic assistance comprised 10 movement cycles, including elbow flexion and extension. Seven volunteers were seated with the right forearm and upper arm fixed to the DiC system. One therapist was instructed to make 10 similar elbow flexion and extension movements to assist the patient's elbow movements. After the therapist's assistance, the DiC system reproduced the 10 repetitive elbow flexions and extensions made by the therapist. The highest and lowest elbow angles in each flexion and extension cycle, and the times at which those angles were reached, were measured.
Results: The intraclass correlation coefficient between human and robot assistance was 0.96 (p < 0.0001) for the highest and lowest elbow angles and 0.96 (p < 0.0001) for the times at which those angles were reached. Bland-Altman plots showed interchangeable differences in timing between human and robot assistance (96.4% within 2 standard deviations).
Conclusions: The DiC system shows excellent reproducibility between human and robot assistance and may be effective for upper limb training in stroke patients. The system was preliminarily developed for the rehabilitation of upper limb motor dysfunction after stroke.
Keywords: robot assistance; occupational therapy; rehabilitation; stroke; upper limb
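The Bland-Altman agreement analysis described above can be sketched as follows. This is a minimal illustration with synthetic paired timings, not the study's actual data or code: the variable names and the simulated values are assumptions, and only the method (bias, 2-SD limits of agreement, and the percentage of differences falling within them) follows the abstract.

```python
import numpy as np

def bland_altman_limits(human, robot):
    """Return the bias (mean difference), the 2-SD limits of agreement,
    and the percentage of paired differences falling within those limits."""
    human, robot = np.asarray(human, float), np.asarray(robot, float)
    diffs = robot - human
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    lower, upper = bias - 2 * sd, bias + 2 * sd
    within = np.mean((diffs >= lower) & (diffs <= upper)) * 100
    return bias, (lower, upper), within

# Hypothetical paired cycle timings (seconds): therapist vs. robot replay
rng = np.random.default_rng(0)
human_t = rng.normal(2.0, 0.1, 30)
robot_t = human_t + rng.normal(0.0, 0.05, 30)
bias, (lo, hi), pct = bland_altman_limits(human_t, robot_t)
```

A high percentage within the 2-SD limits (as with the 96.4% reported above) is what supports calling the two assistance modes interchangeable.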
- Extraction of tongue coating area from tongue image for automated tongue diagnosis
- Publish Date : 2020/08/03 Vol.20
- Report Outline :
- Purpose: To automate tongue diagnosis, we propose a method utilizing machine learning to extract the tongue coating area from tongue images captured using a tongue image analysis system (TIAS).
Subjects and Methods: Tongue images were captured using a TIAS and a fluorescence imaging system from 11 participants (20 to 24 years old), and only the tongue coating area was extracted from the images. For extraction, the TIAS and fluorescence images were segmented into superpixels, machine learning was performed based on the features of the corresponding superpixels to obtain information regarding the presence of tongue coating, and the coating areas were extracted from the TIAS images. Furthermore, leave-one-out cross-validation was performed, and the performances of a support vector machine (SVM) and random forest (RF) were compared.
Results: Two machine learning classifiers were built for tongue coating extraction. With these classifiers, which used the SVM and RF to learn the data, the percentage of correct responses was approximately 86% for both, an accuracy similar to those obtained in previous studies.
Conclusion: We proposed a tongue coating discrimination method utilizing feature analysis with an accuracy equal to or better than those of previous studies. Our proposed method is superior to the conventional method because it can analyze both the tongue and its peripheries. However, its accuracy is low for cases involving thin tongue coating (white coating, etc.), and accurate extraction in such cases is difficult with our method, which remains a direction for future research.
Keywords: Machine learning, Tongue diagnosis, Eastern medicine, Support vector machine (SVM), Random Forest
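The classification pipeline described above (per-superpixel features, SVM vs. RF, leave-one-out cross-validation) can be sketched as follows. The feature vectors here are synthetic stand-ins (e.g. mean color plus a texture value per superpixel); the real features would come from the paired TIAS and fluorescence images, so treat this as an illustration of the comparison procedure only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: one feature vector per superpixel
# (hypothetically mean R, G, B and a texture measure), labeled
# 1 = coating present, 0 = no coating.
rng = np.random.default_rng(1)
n = 40
coated = rng.normal([0.8, 0.8, 0.7, 0.5], 0.05, (n, 4))
uncoated = rng.normal([0.7, 0.4, 0.4, 0.3], 0.05, (n, 4))
X = np.vstack([coated, uncoated])
y = np.array([1] * n + [0] * n)

# Leave-one-out cross-validation for both classifiers, as in the study
loo = LeaveOneOut()
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=loo).mean()
rf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=loo).mean()
```

Comparing the two mean leave-one-out accuracies is exactly the SVM-vs-RF comparison the abstract reports (both near 86% on the real data).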
- Choosing an Obi Suitable for Kimonos Eliciting a Sense of Hannari
- From a color-based perspective -
- Publish Date : 2019/08/12 Vol.19
- Report Outline :
- Purpose: Matching the color of a kimono with that of an obi is an important part of creating a beautiful kimono arrangement. This experiment aims to clarify what particular "color" features of an obi fit with hannari-eliciting kimono images. We used a thought algorithm based on a kimono expert's selection process for suitable obi, and examined whether instruction in it was effective for a person with no kimono knowledge.
Participants and Methods: One kimono expert and 12 graduate students with no kimono knowledge were asked to judge whether two obi images, shown at the top and bottom, were suitable for five kimono images. Next, the specialist's thought algorithm was clarified by protocol analysis. We then instructed a student in the specialist's algorithmic thought process and evaluated whether the instruction was effective. Finally, the RGB and u'v' values of the obi images were obtained to clarify, by discriminant analysis, what color characteristics the obi considered suitable for the kimono images had.
Results: The RGB and u'v' characteristics of the colors of suitable obi were clarified, and discriminant analysis revealed the particular color characteristics present when a "suitable obi" was chosen. However, instruction in the expert's thought algorithm was not effective.
Conclusions: About 80% of the "suitable obi" selected by students without knowledge of kimono were identifiable by "color" in the discriminant analysis; for the expert, this figure was about 50%. This suggests that students without kimono knowledge chose based on their color preferences, while the expert also considered factors such as "status" and "season".
Key Words: Kansei, Color, Kimono, Obi, Discriminant Analysis
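A two-class discriminant analysis on color features, as used above, can be sketched with Fisher's linear discriminant. This is a minimal illustration on hypothetical mean-RGB values for "suitable" and "unsuitable" obi; the function name, feature choice, and simulated data are all assumptions, not the study's materials.

```python
import numpy as np

def fisher_lda(X_pos, X_neg):
    """Two-class Fisher discriminant: returns the projection vector w
    and a midpoint threshold for classifying new color vectors.
    The positive class projects above the threshold."""
    m_pos, m_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sw = np.cov(X_pos, rowvar=False) + np.cov(X_neg, rowvar=False)
    w = np.linalg.solve(Sw, m_pos - m_neg)
    thresh = w @ (m_pos + m_neg) / 2
    return w, thresh

# Hypothetical mean RGB values of obi judged suitable / unsuitable
rng = np.random.default_rng(2)
suitable = rng.normal([200, 170, 160], 10, (20, 3))
unsuitable = rng.normal([120, 110, 150], 10, (20, 3))
w, thresh = fisher_lda(suitable, unsuitable)

# Discrimination rate: fraction of samples on the correct side of w
correct = np.concatenate([suitable @ w > thresh, unsuitable @ w <= thresh])
rate = correct.mean() * 100
```

The discrimination rate here corresponds to the "identifiable by color" percentages reported in the conclusions (about 80% for students, about 50% for the expert).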
- Differences in manual exercise therapy skills between students and therapists
- Publish Date : 2019/04/02 Vol.19
- Report Outline :
- Purpose: We have developed a hemiparetic patient arm robot (Samothrace: SAMO) for repeated practice of manual exercise therapy. In this study, our aim was to quantify the differences in manual exercise therapy skills between students and therapists.
Subjects and Methods: The subjects consisted of one occupational therapist and three fourth-year university students. Examples of elbow joint exercises were displayed on a PC screen, and while observing the examples, the subjects passively flexed and extended the elbow joint of the arm robot, with the exercises being recorded by SAMO.
Results and Conclusion: When comparing the movement of the elbow joint of the robot, the maximum flexion angle of the robot arm was significantly smaller for the students than for the occupational therapist, and the maximum extension angle was larger for the students than for the therapist. Furthermore, the maximum angular velocity and maximum angular acceleration with which the students moved the elbow joint of the robot were significantly higher than those of the occupational therapist. The results showed that the frequencies of articular movement by the students were smaller than those in the examples and those of the therapist, and that the cycle of joint angle changes was prolonged. In addition, the force the students applied to the robot arm had a longer cycle than that in the examples. These results verified that, compared with the therapist, the students had not fully mastered the passive exercises appropriate to abnormal muscle tone in the elbow flexor and extensor groups.
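The kinematic measures compared above (maximum flexion/extension angles, peak angular velocity, and peak angular acceleration) can be derived from a recorded joint-angle trace by numerical differentiation. The trace below is a synthetic sinusoid standing in for the elbow angles SAMO logs; the sampling rate and signal parameters are assumptions for illustration.

```python
import numpy as np

# Hypothetical elbow-angle trace (degrees) sampled at 100 Hz,
# standing in for the joint angles recorded during passive exercise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
angle = 60 + 50 * np.sin(2 * np.pi * 0.25 * t)  # flexion/extension cycles

# Metrics of the kind compared between students and the therapist
max_flexion = angle.max()                 # largest flexion angle (deg)
max_extension = angle.min()               # smallest angle, i.e. fullest extension
ang_vel = np.gradient(angle, 1 / fs)      # angular velocity (deg/s)
ang_acc = np.gradient(ang_vel, 1 / fs)    # angular acceleration (deg/s^2)
peak_vel = np.abs(ang_vel).max()
peak_acc = np.abs(ang_acc).max()
```

Computing these per cycle for each subject and comparing groups statistically is the kind of quantification the study performs; faster, jerkier student movements show up directly as higher peak velocity and acceleration.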
- Fall prediction accuracy of visual spatial abilities tests in patients with Alzheimer's disease: a retrospective study
- Publish Date : 2019/03/22 Vol.19
- Report Outline :
- Objective: The purpose of the present study was to conduct five visual spatial abilities tests frequently used in Japan (clock drawing test [CDT], overlapping figures test of the Visual Perception Test for Agnosia [overlapping figures test], constructions of the Japan version of the Alzheimer's Disease Assessment Scale [constructions], intersecting pentagon copying test [PCT] of the Mini-Mental State Examination, and Yamaguchi fox-pigeon imitation test [YFPIT]) on patients with Alzheimer's disease, and to compare the fall prediction accuracy of these tests.
Methods: The participants comprised 35 Alzheimer’s disease (AD) patients (average age: 80.5 ± 6.5 years old). We compared the results of the five visual spatial abilities tests using the χ2 and Mann–Whitney U tests. We performed a receiver operating characteristic (ROC) analysis using evaluation indicators that showed a significant difference between two groups as independent variables and calculated the area under the curve (AUC) and cutoff value.
Results: Only in the CDT (p = .032, effect size: r = -.36) and the overlapping figures test (p = .020, effect size: r = -.39) were the results of the fall group worse than those of the non-fall group. For the CDT, the AUC for falls was 0.711 (95% confidence interval [CI]: .538–.884, p = .033), and the sensitivity and specificity at a cutoff value of 3 were 82.4% and 55.6%, respectively. For the overlapping figures test, the AUC was 0.699 (95% CI: .524–.875, p = .044), and the sensitivity and specificity at a cutoff value of 1 were 55.6% and 82.4%, respectively.
Conclusion: The AUCs for falls for the CDT and the overlapping figures test, which both showed between-group differences, indicated that the overlapping figures test had low fall prediction accuracy while the CDT had moderate accuracy. Thus, the CDT can be considered a simple visual spatial abilities test that can be used for screening and predicting falls in AD patients. Further investigations with a larger sample size are required.
Keywords: Visual spatial abilities, Fall, Clock drawing test
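The ROC analysis reported above (AUC plus an optimal cutoff with its sensitivity and specificity) can be sketched in plain numpy. The scores below are hypothetical CDT-like values, not the study's data, and the function name and the use of Youden's index to pick the cutoff are assumptions; the abstract does not state how its cutoffs were chosen.

```python
import numpy as np

def roc_auc_and_cutoff(scores, fell):
    """Empirical AUC via the rank (Mann-Whitney) formulation, plus the
    cutoff maximizing Youden's index (sensitivity + specificity - 1).
    Lower test scores are assumed to indicate higher fall risk, so
    scores are negated before ranking."""
    scores, fell = np.asarray(scores, float), np.asarray(fell, bool)
    pos, neg = -scores[fell], -scores[~fell]  # higher = more at risk
    # AUC = P(random faller outranks random non-faller), ties count 1/2
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    best = None
    for c in np.unique(scores):
        sens = np.mean(scores[fell] <= c)   # fallers at or below the cutoff
        spec = np.mean(scores[~fell] > c)   # non-fallers above the cutoff
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return auc, best[1], best[2], best[3]

# Hypothetical CDT scores (0-10): fallers tend to score lower
cdt = np.array([2, 3, 3, 4, 5, 5, 6, 7, 7, 8, 8, 9, 9, 10])
fall = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0], bool)
auc, cutoff, sens, spec = roc_auc_and_cutoff(cdt, fall)
```

An AUC around 0.7, as reported for the CDT and the overlapping figures test, corresponds to the "moderate" and "low" prediction accuracy labels used in the conclusion.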