Engineering Better Robotic Interfaces for Physical Medicine and Rehabilitation. Jeremy Brown, Ph.D., presents at the Johns Hopkins Department of PM&R's Grand Rounds on January 19, 2021.

Today I'm going to be talking largely about the work we've been doing in our lab, the Haptics and Medical Robotics Lab on the Homewood campus, engineering robotic interfaces. I said "for physical medicine and rehabilitation," which is a little bit of a stretch; you'll see that some of the work we do is outside the PM&R space, but a lot of it is very clinically oriented and clinically relevant. So I'll jump right in with the formalities and objectives: give an overview of telerobotic devices, the class of robotic systems we work with the most, which generally involve a human in the loop; the application of haptics in prosthetics, which is by and large the work I've been doing the longest; and then the application of haptics in telerobotic surgery, which, as I said, is a little away from PM&R, but I think some of the work we're doing there, especially because of the collaborations we have, will carry over into PM&R, particularly in the motor-learning space. I'll end by showing some of the late-breaking things we've been trying to get going and keep going in the COVID era. In terms of disclosures, I really don't have anything to disclose.

So to start off, I probably don't have to stress to this audience that we use our upper extremities, our hands, to do dexterous tasks of varying levels of complexity and difficulty in our daily lives, whether for hobby and pleasure or for work; we do a lot of complex movements with our extremities. I also probably don't have to stress to this audience that a large share of the reason we can do such dexterous tasks is that we get rich sensory afferents from the periphery back to our central nervous system. A large share of my work focuses on those sensory afferents, haptic feedback in particular, and on why haptic feedback is important in robotic systems, especially systems where you want the operator to have fine, dexterous manipulation and control capabilities. We know that haptics is important, but I'll show a video that I think is cool (you may have seen it already) that illustrates why haptics matters from a motor-control perspective. This is a classic video out of Roland Johansson's group. In it you'll see a participant light a match in the standard way: picking it up out of the matchbox and striking it on the strike strip on the side of the box to light it. Then they applied local anesthesia to the fingertips of the participant and asked them to do the task again, and you'll see that the performance looks significantly different. So what can we learn from this?
Well, one, you can see what local anesthesia, the removal of that instantaneous tactile afference from the sensory apparatus, does to fine, dexterous manipulation. Two, the participant was still able to perform the task, and a lot of that has to do with the fact that vision was still allowed. We know that we are visually dominant: as long as we can visually estimate the scene, we can still reliably work and operate in the environment. I would argue that a lot of the robotic systems we work with correspond to exactly this scenario: there is limited sensory apparatus, if any, in terms of haptics, and the systems are mainly operated through visual servoing and visual estimation of the environment.

Okay, so let's drive this into some of the work we do. Even though we can directly manipulate objects in our own environment, there are a lot of environments where direct manipulation is not feasible, either because the environment is located at a distance, poses hazards to the body, is at a different scale than our bodies, or because we've somewhat contrived ways of limiting access, as in the case of minimally invasive surgery. There are also scenarios where the body itself has changed, as in the case of amputation, and we want to be able to do the same types of direct manipulation we do with a healthy, intact limb. What we need in both of these scenarios is an interface that takes the intent we are trying to carry out on the environment and carries it out on our behalf. These interfaces were originally called teleoperators. Nowadays true teleoperators really don't exist except in some niche scenarios; what we really have is a class of systems I call telerobots, which allow us to operate at a distance. In our lab we like to think of an upper-limb prosthesis as essentially a telerobot, because you're operating this distal hand from a more proximal location, generally through EMG.

Because these are robotic systems, we've done really well over the last decades at one-to-one motion mapping: whatever motions you input into the system, the robot carries out on the environment. And because these are robotic systems with a wonderful suite of sensors, we can measure all of the interactions between the robot and the environment. But by and large, that environmental interaction information doesn't pass back to the operator, except maybe through vision. What my lab focuses on is how we can add haptic interfaces back to these systems in order to provide this interaction information as haptic feedback to the operator. When you look a little more closely at the haptic interfaces of these robotic systems, you'll find that by and large they are not high fidelity, if they exist at all. So our lab focuses heavily on understanding the right way to add haptic information to telerobotic systems.
When do you need it, when do you not need it, what types do you need, and the whole spectrum of questions around that. As a case in point, here are the two telerobotic systems we work with heavily in our lab: robotic surgical devices like the da Vinci on the left, and upper-limb prostheses like the MPL you see on the right. Both of these systems offer wonderful control and amazing degrees of freedom. But a task as simple as the one I talked about before, taking a match and striking it on the matchbox, is difficult to perform even with these complex robotic systems that mimic the degrees of freedom of the natural limb and allow a comparable level of control. These tasks are still not trivial to perform, and that drives home where my lab's interest and focus lie.

I'll start by talking about some of the work we've done in the upper-limb prosthetics space, and then the work we've done in robotic minimally invasive surgery. The work I'm going to talk about in upper-limb prosthetics was done by my PhD student Neha Thomas, along with lab alum Garrett Ung, looking at and evaluating haptic feedback in upper-limb prostheses. It's known that, despite all the advancements being poured into prosthetic technology today, especially upper-limb prostheses, there are really only two true clinical options for most amputees: the body-powered prosthesis, which we know is very old, post-Civil War-era technology, and the myoelectric prosthesis. The differences between these devices are numerous, and we can count them off. From an engineering standpoint, we are very interested in the body-powered prosthesis, because when I started doing this work, we would talk to prosthetists, clinicians, and amputees, and they would say: in addition to my body-powered device being pretty simple and easy to fix, because of the Bowden cable that connects the prehensor to the shoulder harness I wear, I get some inherent haptic force reflection. I can feel some of the interactions happening at my prehensor; I can feel them in my shoulder through the cable. So this was anecdotally known among amputees, prosthetists, clinicians, and the like. I'm going to very quickly take you back to a study I did when I was in grad school at Michigan. This is the only experiment and set of results I'll present today that are my own; everything else was done by my students. I should let you know that in case anybody's checking, and I did invite some of my students, because by and large this work was done by them.
In that study at Michigan, we were interested in empirically evaluating the utility of this haptic feedback in body-powered devices, to see if we could come up with an empirical result that matched the anecdotal data we'd known for some time. The problem with trying to evaluate the utility of haptic feedback in a body-powered device is that you rely on that Bowden cable both for control and for feedback, so it becomes a little tricky to tease apart control from feedback. So we separated the question and asked: what happens if I remove the feedback from the body-powered prosthesis? Does it lead to a degradation in performance on a task that hopefully involves some element of haptic feedback? Going back to thinking of a prosthesis as a teleoperator, as we often do in our lab, this is essentially taking the inherent force reflection you get from the device and removing it: you can still control the environment, but you don't get any of that interaction information back.

What I'm going to show you is a series of videos walking through the engineering process we used to test this idea. We started, with a collaborator of mine who was at Michigan at the time, with a standard body-powered prosthesis, as you can see in this video: a shoulder harness with a Bowden cable connection to a voluntary-opening device. We changed the Bowden cable from connecting to a shoulder harness to an exoskeleton worn on the left arm here. That was largely because the exoskeleton is a single-degree-of-freedom device and much easier to control than something like the shoulder, which we know is a multi-degree-of-freedom joint. But it's essentially the same principle: flexing the arm pulls on the Bowden cable the same way the shoulder harness does. Then we made the switch from a voluntary-opening to a voluntary-closing device, because while you do get some haptic reflection in voluntary-opening prostheses, it's really in voluntary-closing devices that you can actually feel your grip force transmitted through the cable, and we thought that would be the most robust way to test this.

So now we have a shoulder-controlled, or rather exoskeleton-controlled, prosthesis that functionally works the same way as the clinical variant. The problem remains, though: if I cut that cable, I cut both the ability to control the prosthesis and the feedback from it. So we decided that if we cut the cable, maybe there's a clever way of reconnecting it, and we did it electromechanically, with two electromechanical actuators. It looks like this: there are now two separate Bowden cables, one going to the elbow exoskeleton and one going to the prosthesis, and they aren't physically connected; they're connected essentially in software.
The electromechanical actuators do the pulling on their respective cables. We have a sensor on the exoskeleton that measures its angle, and we map that angle to the prosthesis actuator, a little linear slide that controls the pulling on the Bowden cable that opens and closes the prehensor. Here is a video of this working; pay attention (I'm not sure my mouse is showing) to the motors sitting on the table in the back, to see exactly what happens when the participant flexes. You can see that even though the physical cables aren't connected, we still get essentially a one-to-one mapping between the input position of the exoskeleton and the output aperture of the prehensor.

What this affords us is the ability to turn the feedback off. Because the exoskeleton has a Bowden cable connected to it, all we have to worry about is the tension in that cable: if we make that tension essentially zero, there is no force reflection. We can do that by making the linear position of the exoskeleton actuator perfectly track the angle of the arm, so there is no resistance to movement of the arm; that turns the feedback off. If we want the feedback back on, all we need to do is monitor the tension in the Bowden cable on the prosthesis side, the one operating the prehensor, and proportionally reproduce that tension in the cable on the exoskeleton side. Each side has a nice little load cell that measures the force, the tension, in the cable, and in software we do a proportional mapping to make sure the forces are exactly the same, or some scaled variation. So we can use the exoskeleton actuator to drive and produce force on the Bowden cable in the exoskeleton, producing force feedback akin to what you would feel if the cable were still anchored at the shoulder.
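To make that software coupling concrete, here's a minimal sketch of the control logic as described, with hypothetical device wrappers standing in for the real hardware drivers (the actual gains, control rates, and interfaces aren't specified in the talk):

```python
# Minimal sketch of the software coupling between exoskeleton and prosthesis.
# exo.read_angle(), exo.command_force(), prosthesis.command_position(), and
# prosthesis.read_tension() are hypothetical stand-ins for real hardware drivers.

CABLE_GAIN = 1.0   # assumed linear map from arm angle to cable displacement
FORCE_SCALE = 1.0  # proportional gain; 1.0 reproduces the measured tension exactly

def control_step(exo, prosthesis, feedback_on: bool):
    # Position channel: exoskeleton angle drives prehensor aperture (1:1 mapping).
    angle = exo.read_angle()
    prosthesis.command_position(CABLE_GAIN * angle)

    if feedback_on:
        # Force channel: reproduce the prosthesis-side cable tension
        # (measured by its load cell) in the exoskeleton-side cable.
        exo.command_force(FORCE_SCALE * prosthesis.read_tension())
    else:
        # Feedback off: the exoskeleton actuator simply tracks the arm, so the
        # cable tension stays near zero and the user feels no resistance.
        exo.command_force(0.0)
```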
We ran a simple experiment. I won't go into too much detail, but we invited participants, all able-bodied individuals. You'll notice the prosthesis has a nice bulge at the end that you can put your fist inside, so that able-bodied, unimpaired participants could perform these experiments. The task was very simple: object identification. We had little foam blocks of different stiffnesses, and you'll see this repeated as a theme throughout some of the experiments we do. The foam blocks came in three stiffnesses, and all we asked participants to do was squeeze a block, recognize it as one of the three stiffnesses we referred to as soft, medium, and hard, and sort it into the appropriate box depending on its stiffness. The four conditions we had were: no feedback at all (no vision and no haptics); haptics only (only force reflection at the exoskeleton); vision only (no haptics, but you can see everything); and of course haptics plus vision.

Across these four conditions we wanted to see how participants performed the task, and what we found matched the anecdotal data we had going into this experiment. As you'd expect, two pieces of information are better than one: if I can see it and I can feel it, you tended to perform better than in all the other conditions. At the same time, haptics alone beat vision alone in this experiment. The task was object identification by stiffness, which relies heavily on relating your force to your displacement as you squeeze an object, and our results showed that force feedback was better in this case than visual estimation of something like stiffness. And as a confirmation, the no-feedback case really was no information at all, just a shot in the dark: with three objects to choose from, the guess rate is roughly 33%, and that's about what we found with no feedback.

So this is a result where we said: let's take a clinical system, engineer it in a way that lets us begin to test hypotheses with it, and we were able to confirm empirically what we had been hearing all along anecdotally from clinicians and amputees. It's not an amazing, profound result, because it was already well known, but at least we can say we now have data and evidence to back it up.

I'll move on from that, and again, everything I talk about from here on is the work of my students. We know that body-powered prostheses, even though they have this inherent force reflection, are limited: you generally get limited degrees of freedom, usually a single degree of freedom opening and closing the prehensor. When you talk to amputees, they're not always aesthetically pleasing; you can't put nice cosmeses over them. They are simple and mechanical, so depending on what an amputee is doing, say working out in the yard, where they might start to sweat and mess with the conductivity of EMG electrodes, they may prefer this device over a myoelectric one. But if the field continues improving myoelectric devices, which is where I feel it is going, that becomes attractive, and our lab is heading in that direction as well. As it stands right now, though, myoelectric prostheses don't have any haptic feedback, and so my lab is focusing on how we can add haptics to these myoelectric prostheses. Everything we're doing now is noninvasive, over-the-skin haptic feedback, more of a sensory-substitution type of haptic feedback, in contrast to the many labs out there doing stimulation cortically or at the periphery. And I think that invasive work is the future.
What we're doing in our lab, and what I'll show you, is trying to better understand the science behind what the right information to send is, when you need to send it, and how you design and engineer devices to deliver that information to the user when it's needed.

This next study is motivated by the fact that it's widely accepted in the field that when you add haptics as a sensory substitute to a myoelectric prosthesis, performance gets better; we've had studies in our own lab showing this result as well. What we were very interested in is this: as a field, the haptics community has done a number of studies showing this positive benefit, yet by and large clinical devices still don't have it. So what's the bottleneck? We started asking: how does the benefit of adding haptics compare to a standard clinical myoelectric prosthesis, and how do both compare to the gold standard, the healthy, intact limb? So we set out to compare three scenarios: our lab-built prosthesis with haptic feedback, a standard clinical myoelectric prosthesis without it, and the natural limb, to see how these three compare against each other.

We looked at it in two ways. One is task performance: if I give you a task that requires force information to complete, and I give you that information back as haptic feedback (vibrotactile feedback in this case, because that's what we decided to use for this experiment), how well do you do? The other is cognitive load: we know that vision carries a very high cognitive load. If I add haptics, does that reduce the cognitive load, or does it make it worse, because now you have to pay attention to an additional sensory channel in this sensory-substitution setting? So in addition to task performance, we wanted to look at the cognitive implications of processing this additional haptic information in the three scenarios.

Here's the setup; I'll walk you through it clockwise, starting at the top right. There are three objects again. They're all silicone now instead of foam, but still in three different stiffnesses: soft, medium, and hard. They look about the same, and we masked any visual imperfections by wrapping them in black fabric. On the bottom right you see an image of the setup itself: you put your hand through the little window and squeeze the blocks. That was the task. We gave you two paired blocks, either soft-medium, medium-hard, or soft-hard, and you had to identify whether the first or the second block was different: your standard two-alternative forced-choice paradigm.
On the bottom left, because we're using the intact hand as one condition, there's an FSR, a force-sensitive resistive sensor, very similar to the work Luke Osborn did in Nitish Thakor's lab, with which we can measure the pressure on the finger. We put that same sensor on the end-effector, the digit, of the voluntary-closing prosthesis. This is another mock prosthesis we use in our lab; this one comes courtesy of Mark Hopkins and the group at Dankmeyer, who really helped us design this device. Again, we measure the forces at the digit, and we deliver that back as vibration through the vibrotactile strap you can see right below the bicep. Then, in collaboration with a colleague of mine, Hasan Ayaz at Drexel University, we've been using functional near-infrared spectroscopy (fNIRS), which looks at the level of oxygenation in the prefrontal cortex as an index of cognitive load. I'm not going to pretend to understand all the science behind that, so I apologize if you have questions about it and I fail to answer them. We've looked at other modalities: EEG, of course, is another modality used to estimate cognitive load, but the problem is we can't use our electromechanical devices in a scanner, and EEG, as those of you who have used it know, takes a long time to set up, gelling all the electrodes you want to use. So we decided to go with fNIRS; there has been some work comparing these modalities, and fNIRS is reliable.
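As a rough illustration of the sensory-substitution mapping just described, here's a minimal sketch (hypothetical sensor and actuator wrappers, and an assumed linear mapping; the study's actual calibration isn't given in the talk):

```python
# Map grip force measured by an FSR on the prosthesis digit to the drive
# amplitude of a vibrotactile actuator worn below the bicep.
# fsr.read_force() and tactor.set_amplitude() are hypothetical device wrappers.

F_MAX = 20.0  # assumed full-scale grip force in newtons

def update_feedback(fsr, tactor):
    force = fsr.read_force()                   # newtons at the digit
    level = min(max(force / F_MAX, 0.0), 1.0)  # normalize to [0, 1]
    tactor.set_amplitude(level)                # stronger grip -> stronger buzz
```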
So what did we find? What you see here is a trimodal distribution of responses depending on which object pair was presented: soft-medium, medium-hard, or soft-hard; that's what the labels on the x-axis stand for. I'll abstract this for you and focus on the middle pairing, soft-medium. I can try to answer questions about the other ones, but this is where we saw our strongest result. What we found is that, as we expected, the intact healthy limb is the gold standard: essentially 100% accuracy, 100% of the time. Given two objects, you were able to differentiate them. The clinical-standard prosthesis, the myoelectric without haptics, came in around 50%: essentially guessing. Given two objects, about 50% of the time your guess was correct. But when we added the haptic feedback, the maroon wavy-pattern bar here, performance got significantly better: clearly still significantly lower than the hand, because the hand is essentially perfect, but significantly better than the standard clinical prosthesis.

When we look at cognitive load, the story is very similar. What you're looking at here is just one of the optodes (I believe it's left medial, but I'd have to go back to the paper to be 100% sure): the change in oxygenated hemoglobin from baseline. We took a baseline measure before the experiment started and then measured during the experiment. What you see is that the natural hand, the solid blue, shows the smallest change from baseline: the task was not cognitively demanding with your intact hand. The standard prosthesis, the yellow polka-dot bar, shows a very large change, significantly larger than the hand. And when you add the haptic feedback, it actually reduces the cognitive effort. We had hypothesized that it would, but we were also aware there was a chance that having to process this extra channel of information would increase cognitive load; in this case it seemed to reduce cognitive effort, at least compared to the standard clinical prosthesis, though still significantly higher than the intact hand. It means we're going in the right direction, at least. So there's a lot of efficacy and utility in feedback, whether it's sensory substitution like we use or the direct peripheral stimulation a lot of other labs are looking at; it has great potential for improving function for prosthesis users.

I'll wrap up that thread there. There are two more things I'll show you in the prosthesis space, but they're late-breaking, ongoing work, so there aren't a whole lot of results to talk about just yet; I'll at least show you the setups later on. Now I'll jump into the work we've been doing in robotic minimally invasive surgery. The first project I'll start with really doesn't have a whole lot to do with the PM&R space, but the study after it starts to track in that direction. As I mentioned before, the da Vinci robot, which many of you may know about, is a great robot used all the time in the surgery department, especially for general surgery, urology, and gynecology. But right now, as it stands, all surgeons operate this device without haptic feedback; it's all visual feedback at the moment. While expert surgeons learn to account for this lack of haptic feedback in their motor plans, novice surgeons face a very steep learning curve when trying to learn how to use the device. So what we were very interested in is: can we use haptics as a way of speeding up training for these novice surgeons on these complex robotic devices?

We have a simple setup here where we put a standard inanimate task, this ring roller-coaster, where the objective is to move the ring along the track without letting it drag on the track at any point, on top of a force sensor, so we can measure all the interaction forces as you do this simple task. We take that force and map it to these simple wrist-worn tactile devices, which you can essentially think of as a wristwatch: the harder you press on the task, the larger the force you produce, the tighter we squeeze the strap around your wrist.
So it's a proportional mapping: you sense force and produce a squeezing pressure on the wrist. We had this simple task where we asked participants to do the experiment over and over again, about 12 trials, because we wanted to look at learning over the course of those trials, and we compared that to the control condition, the natural-learning condition, where you didn't get any haptic feedback; you just did the task repeatedly. What you're looking at here is essentially a log transform of the RMS force: we measure the force over the entirety of a trial and compute its root mean square. These are the results of a linear mixed model, which basically say that the feedback group produced significantly less force than the no-feedback group over the entirety of all 12 trials. They started off lower and continued that way for all 12 trials. So we were saying: haptics really works, and it leads to improved performance here, because forces in this case are generally a bad thing; they come from either directly contacting the track or pulling the ring against the track itself. The lower the force, the more the track stays in the center of the ring, so you're really moving it in free space. So it seems like adding haptics helps right away.
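For reference, a minimal sketch of the per-trial force metric described above, assuming a uniformly sampled force signal (the mixed model itself was fit separately):

```python
import numpy as np

def log_rms_force(forces: np.ndarray) -> float:
    """Log-transformed root-mean-square of the force samples from one trial.

    forces: 1-D array of interaction-force magnitudes sampled over the trial.
    """
    rms = np.sqrt(np.mean(np.square(forces)))
    return float(np.log(rms))  # log transform applied before the mixed model
```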
Let me just back up really quickly: these were not clinical participants. They were drawn largely from the undergraduate population, non-clinical, and had never used a da Vinci before; they came in for this experiment just to use it. What we see is that, compared to natural learning, they produced less force. We then asked what this means in terms of a speed-accuracy tradeoff: how long it takes them to do the task. Here I'm starting to get more into the learning question. We didn't force them to change their pace; we just asked how fast they went. What we found is that, as expected, when you get this haptic feedback you have extra information to process, and it takes a little while to get used to processing it, so you start off slower than in the no-feedback condition. But trial after trial, participants in the feedback condition got faster, and in fact they got faster at a significantly faster rate than participants in the no-feedback condition. Now, if we had carried this out for an additional eight trials, out to 20, what would have happened? I'm not 100% sure: would the groups essentially reach a point where they are no different from one another, or would they plateau exactly where they are? That's yet to be seen. But at least we can see that even though there's an initial penalty to adding the haptics, within 12 trials, each about 30-45 seconds long, participants got significantly faster. That by itself is not a mind-blowing result; repeatedly doing something, you're going to get better and better. But they got faster while also keeping their forces significantly lower.

The next study I'll show is some work done by Guido, a visiting student of mine, in collaboration with Gabriela Cantarero, also in the surgical-training space. Guido was in my lab for about six months, visiting from Italy; he's now a PhD student at the University of Stuttgart, affiliated with the Max Planck Institute in Stuttgart, Germany. Guido came in with this observation: when you look at surgical robotic training, there are two different flavors. There's VR training: desktop trainers, as well as the da Vinci's own simulator, the dVSS, which attaches to the console and lets you practice all these different tasks in VR. And then there's training on the real robot itself: inanimate tasks, excised tissue, animal models, cadavers, and the like. By and large, when you talk to clinicians, they will say the real robot is still the gold standard, and there is some empirical evidence to back this up: training on the real robot, you actually learn to incorporate and account for the dynamics of the robot itself in your control of the system. So we asked: from a training perspective, are these two platforms equivalent?

What Guido set out to do was build virtual and inanimate analogs of the same training task. What you're seeing on the left-hand side is the video I'll play in a second; actually, let me start on the right-hand side, which is what it originated from. Guido built this inanimate needle-driving platform. It has three rings spaced 45° from each other: one at 0°, one at 45°, and one at 90°. The objective is to take a suture needle and drive it through the rings without deflecting the rings in any direction. The rings do have visual feedback: LEDs around each ring light up, with the intensity and location of the light changing with the deflection of each individual ring, depending on the error you produce. Then he took the 3D models he built for this system and converted them into nice 3D meshes in a VR simulator, where he has the exact same system in the VR environment. He can control both the physical system and the VR system from the same interface: the dVRK, the da Vinci Research Kit, an open-source version of the da Vinci robot that we have on the Homewood campus for research studies related to robotic surgery. Here's the video.
These aren't exactly the same trials, so they won't be time-locked, but you can get an idea of how the same interface, the input from the human side, can drive either the virtual environment or the real tools in the physical environment. The view on the right-hand side is actually taken from the robotic endoscope, which is why the color is not perfect and has this green cast.

Then we ran a study with two groups of participants, A and B. We asked every participant to do a baseline measure of their performance on their respective platform, inanimate (meaning the real world) or VR (virtual reality, of course): 15 repetitions of the task. Then we did training at three different speeds, slow, regular, and fast (I'll explain the reason for that a little later), giving them 40 repetitions of the task on the same platform: if you did your baseline on the inanimate platform, you stayed on inanimate for training, and the same for VR. Then we did an evaluation, 15 repetitions of the same task on that same platform, so we could see performance change from baseline to evaluation due to the training block in the middle. As a final step we did what we call the cross-evaluation: if you did all of your baseline, training, and evaluation on the inanimate platform, we swapped you over and you did a cross-evaluation on VR; likewise, if you did baseline, training, and evaluation on the VR platform, we swapped you and you did a cross-evaluation on the inanimate platform.

Again we got a trimodal distribution, by speed this time. I'm only going to talk about the moderate speed, because it is most reflective of natural training speed, what a participant would naturally do; fast was faster than you would like to go, and slow was slower than you would like to go. (This speed manipulation relates to work we're doing on understanding the speed-accuracy function.) What you're looking at on the y-axis is the performance metric, which is essentially an integral: we took the total displacement of all three rings over the course of the trial and computed its integral over time, so it represents the total displacement of all the rings. Lower is better, because ideal, perfect performance would be no displacement of the rings at all, which is zero.
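Here's a minimal sketch of that error metric as I understand it from the description, assuming uniformly sampled deflection signals for the three rings (names and sampling details are hypothetical):

```python
import numpy as np

def ring_error(d0: np.ndarray, d45: np.ndarray, d90: np.ndarray, dt: float) -> float:
    """Integral over the trial of the summed displacement of the three rings.

    d0, d45, d90: deflection magnitudes of the 0-, 45-, and 90-degree rings,
                  sampled at interval dt (seconds). Lower is better; a perfect
                  trial (no ring ever deflected) scores zero.
    """
    total = d0 + d45 + d90                 # summed ring displacement per sample
    return float(np.trapz(total, dx=dt))   # time integral via trapezoidal rule
```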
What we see is that on both platforms, going from the red box-and-whiskers to the green, there was a significant decrease in error: participants on both platforms got significantly better with training. But then, going from the evaluation on one platform to the cross-evaluation on the other, we found a difference. For the inanimate group, the left-hand plot here, there was no real significant change in error between the evaluation on the inanimate platform and the cross-evaluation on the VR platform. But when we look at the right plot, going from evaluation on the VR platform to cross-evaluation on the physical platform, we've got a significant increase in error. So what we're starting to think is that something about doing the task in the physical environment grounds the skill a little differently than the VR environment does, and allows better skill transfer: you can transfer the skill better from the physical world to VR than from VR to the physical world. We still have much more data to analyze, but I think this is starting to piece together a picture we've been thinking about a lot in our own work. It backs up the idea that the physical robot is somewhat the gold standard: if you develop the skill in the physical world, you can transfer it to VR, but if you learn it in the VR environment, it doesn't transfer as readily to the physical task.

Okay, now I'm going to add a third category and quickly talk about some work we've been doing in fundamental haptic perception, just trying to understand how we perceive the world through our sense of touch. This is work led by my graduate student Mohit Singhala. Mohit has been working largely in this area of haptic perception, and he came up with this question: when we think about perception with one hand or with two, do we perceive the world differently when interacting with one hand versus two? Are two hands better than one? If I perceive an object with both hands, is the percept I develop of that object more robust than if I only felt it with, say, one? Think about it this way: imagine I have this little cube and I ask you to feel it with your left hand. Is your perception of this cube different than if I asked you to perceive the same cube with your right hand? Likewise, if I ask you to perceive it with both hands together, what is your perception of the cube now, and how does it differ from either of the unimanual conditions?

What Mohit developed in the lab is a simple single-degree-of-freedom haptic interface, but a very versatile one in terms of what we can do with it. It consists of a motor and encoder and a little hand fixture with an alternating finger pattern, to make sure that all the motion we're getting is really, in this case, wrist rotation. What we can do with this device, as I mentioned, is make it render any physical environment that has some inherent dynamics and mechanics. Take a spring, for example: Hooke's law says the torque of the spring is equal to some constant K times the angular displacement. So, as you can see here, we can take the same displacement and, by changing the parameter K, produce a different torque output.
A simple example: you've got a doorknob, and while you're turning the doorknob we take the spring out and put a different spring in, so you get a different force, a different torque output, as you turn the doorknob. We can do that in the virtual environment just by changing this stiffness parameter: for the same displacement, you basically get a different torque profile.

So Mohit built this setup, and he has a duplicate, so you can actually use two hands now. You can displace the handle with one hand, with the other hand, or with both hands. The bimanual case is a little hard to picture, so step away from the rotational world for a moment and imagine I gave you a big spring to hold in both hands. You can pull on that spring and feel it pulling both of your hands back together; imagine it was just a rubber band, which would want to pull your hands back together. If you pull with just one hand, you would still feel the resulting force in both hands as it tries to pull them back together, and likewise if you pull with the other hand. That's essentially what we've done, except in a rotational domain: if you rotate with one hand, you feel the resulting torque in the other hand, because we've mapped the torque to the total displacement of both ends of the spring.
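A minimal sketch of that rendering loop under these assumptions: a virtual torsional spring following Hooke's law, with the bimanual condition using the combined displacement of both handles (the device wrappers and exact control law are hypothetical):

```python
# Render a virtual torsional spring on two single-DOF haptic handles.
# Hooke's law in rotation: tau = -K * theta. In the bimanual condition the
# spring's deflection is the combined displacement of both ends, and the
# same restoring torque is felt at both handles.
# read_angle() / command_torque() are hypothetical device wrappers.

K = 0.5  # virtual spring stiffness (N*m/rad); varied across trials to probe the JND

def render_step(left, right, bimanual: bool):
    if bimanual:
        theta = left.read_angle() + right.read_angle()  # total deflection
        tau = -K * theta
        left.command_torque(tau)   # both hands feel the resulting torque
        right.command_torque(tau)
    else:
        for handle in (left, right):
            handle.command_torque(-K * handle.read_angle())
```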
We ran a psychophysics test, the method of constant stimuli, where we varied the change in the stiffness parameter, because we were trying to estimate the just-noticeable difference (JND) in three conditions: left unimanual, right unimanual, and bimanual. What's shown here is one of the psychometric curves from a representative participant. What we found, largely, in what was a majority right-hand-dominant participant pool (I think we had only one left-hand-dominant participant), is that the left hand was actually perceptually more sensitive: a lower JND, so higher sensitivity, with the left hand, even though these were largely right-hand-dominant participants. The bimanual condition fell somewhere in between left and right. We're still analyzing this result as we prepare a submission, but it's starting to piece together this idea that there are perceptual asymmetries between the two hands, and that how the brain pieces the story together bimanually is very complicated. It's not as simple as only trusting information from one hand or the other. There has been work on visual-haptic integration showing that if you distort one channel, you tend to rely on the other; here, we don't seem to get the kind of optimal integration you would expect from the visual-haptic domain. So we're still trying to work out, as engineers rather than neuroscientists, from behavioral studies, what's really going on in these bimanual conditions. We've got some other experiments lined up where we're trying to tease apart something like stiffness and break it down into its constituent parts, and what that really means.

Okay, that's the last of the work where we have solid results to present. Everything else I'll talk about is from the post-COVID, or maybe intra-COVID, era, where we're still building systems and starting to get preliminary data, with one exception, I apologize. Remember the study I showed comparing virtual-reality training and inanimate training? The results I presented were actually only from the sham cohort: in that study, Guido also ran an additional cohort where we applied transcranial direct current stimulation (tDCS) with the exact same protocol. We're in the midst of analyzing it. Gabriela and I actually just had a conversation with Guido today, and he said he's going to start working on it right now, because it looks like the whole EU is getting ready to go into a severe lockdown, so he'll have a lot of time on his hands. We started, and will now finish, analyzing the differences between the sham group and the stimulation group, in terms of what effect the tDCS had on performance.

My student Mohit and my student Teresa have also been working with Gabriela, and everyone else, in this collaborative effort between the hospital and APL, working with Buzz, on a more scientific experiment where we're looking at sensorimotor attenuation. Teresa and Mohit put together the experimental setup you can see Buzz using in the left-hand figure here, where we're comparing self-derived stimuli against stimuli that are not self-derived, and looking at what happens in terms of sensorimotor attenuation. You can see the little setup: you press with your hand, a little force sensor measures the force, and, essentially as a telerobotic device, you input force and we output force through a little stimulator, which we can also measure on the other side. It's a simple setup we're now using to run experiments investigating sensorimotor attenuation, and we get to do that alongside intracortical recordings, because Buzz has been implanted with intracortical electrodes.

My student Sergio Machaca and my former student Garrett have been working on the next experimental setup. Everything I've shown you in the haptics-for-telerobotics space has largely been single-modality feedback: we measure force and render it as either squeezing or vibration, but it's still just one type, one piece, of information. We know that you've got a suite of mechanoreceptors embedded in your fingers, and other sensory receptors in the musculoskeletal apparatus, so we get a lot of haptic information all the time.
So we started thinking about whether we can give more than one type of haptic information. This simple study looks at two modalities of haptic information. What you see here is a simple experiment, all in a virtual environment right now, where you have a virtual object and the task is basically to grasp the object and keep it from slipping. The difference is that the object is being pulled: virtually, you can think of it as a spring attached to the object that we pull down on while you're holding it. So it's almost an analog to the classic grasp-and-lift paradigm, except here it's grasp-and-hold, and we modulate the lift, or loading, forces on the object. We provide participants with haptic feedback of their grip force, through the wrist-squeezing device we talked about before, as well as information about incipient slip: if the object starts to slip, we map the slip velocity to a vibrotactile actuator they wear on the arm. So they've got two pieces of information, grip force and a slip indicator, and we want to see how well they do the task. We've looked at this from two interaction paradigms: one is physically manipulating a gripper like you would find on the da Vinci (this little gripper on the bottom left is modeled after a da Vinci tool gripper), and the other is geared more toward a prosthesis, where antagonist EMG electrodes let you control the gripper the way you would the prehensor of a prosthesis. We don't have any actual data to show on this just yet, so I'll at least show you a video of how the setup is supposed to work. Oh, and I should mention these objects are also brittle: imagine the object is actually an egg, where if you squeeze too hard you can break it, which just happened right there. You'll notice we removed visual feedback: again, we tend to be visually dominant, and we wanted to see what would happen based only on the haptic information coming back. It's a little bit contrived, but it allows us to test the utility of that haptic information.
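A minimal sketch of that two-channel mapping, with hypothetical device wrappers and assumed full-scale values (the actual gains and hardware aren't specified in the talk):

```python
# Two haptic channels rendered simultaneously:
#   grip force -> wrist-squeezing band (proportional tightness)
#   slip speed -> vibrotactile actuator (proportional amplitude)
# Sensor reads and actuator commands are hypothetical device wrappers.

F_MAX = 15.0  # assumed full-scale grip force (N)
V_MAX = 0.2   # assumed full-scale slip speed (m/s)

def clamp01(x: float) -> float:
    return min(max(x, 0.0), 1.0)

def feedback_step(sim, squeeze_band, tactor):
    squeeze_band.set_tightness(clamp01(sim.grip_force() / F_MAX))
    tactor.set_amplitude(clamp01(sim.slip_speed() / V_MAX))  # 0 when not slipping
```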
The only data I've collected since the COVID shutdown is some work by a student of mine who is in Germany right now on a Fulbright fellowship. Because Germany handled things a little better than we did, they could run studies for the better part of the fall. She's been working on experiments combining the haptic feedback we've been developing in our lab for prostheses with some of the advances taking place in industry: these automated grasp controllers, which will grip tighter on their own if they sense the object slipping. So we're wondering, in this shared-control strategy: if you add haptics on top of this more intelligent control of your prosthesis, can you get better performance out of it? She has a setup she developed in Germany using the Ottobock SensorHand, the Speed, I want to say; I can't remember exactly what the model is. The task is fairly simple: grab an object out of one bin and place it in the other bin. The only difference is that we forced participants not to look at the task; you have to stare straight at a wall and rely, in this case, on the haptic feedback. On one digit you have a contact-location sensor that tells you where along the length of the finger contact was made, and on the thumb in this case you have a pressure sensor that tells you how much pressure you're generating. We give them haptic feedback of both pieces of information, so you can recognize where you're grasping the object and also how hard you're squeezing it. We compare that to a case where you don't have any of this information, and then also to a condition where, on top of the haptic feedback, we add the grasp controller: if the object starts to slip, the grasp controller kicks in and forces the prosthesis to grip a little bit harder.

The video I'm going to show next is a little long; I apologize, I didn't really know the right place to trim it. You're going to see four different videos: the no-feedback condition, an unsuccessful trial and a successful trial; and then the hybrid condition with haptics and the automated grasp controller, an unsuccessful (or partially successful) trial and a successful trial. Each is a side-by-side: one is the side view from the camera looking at the experiment; the other is from eye-tracking glasses the participants wore, so you can see where their gaze is over the course of the experiment, showing that they are not looking at the task happening down below. Now, this one is with the haptics turned on and the automated grip control. This is the one with no feedback where the participant was still successful; we thought it was worthwhile to show that you can still be successful even without the feedback. And this is the final one, with the feedback on and the automated controller running, where the participant is successful all the way through.

I don't have data to really show you, but I can tell you that our results so far suggest the benefit of the haptics plus the automated controller is that it reduces the variability in performance: participants are simply more consistently successful. We do have some stellar participants with no feedback who are able to do it, and when you ask them what strategy they were using, they describe these intricate strategies (I don't even know what you would call them) for trying to figure out where the object is and come up with some representation of it. Whereas when you talk to the participants who were using the haptics, it was essentially: I was relying on the feedback to tell me when I had the object localized.
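To illustrate the shared-control idea, here's a minimal sketch of an automated grasp reflex layered under the user's command. This is my own construction, not the Ottobock hand's proprietary controller; all names and gains are hypothetical:

```python
# Shared control sketch: the user's command sets the baseline grip, and an
# automatic reflex adds grip force whenever incipient slip is detected.
# hand.slip_detected() and hand.command_grip() are hypothetical stand-ins,
# not the actual prosthesis firmware.

GRIP_STEP = 0.05  # assumed boost added per control tick while slipping
GRIP_MAX = 1.0    # normalized maximum grip command

def grasp_controller_step(hand, user_cmd: float, auto_boost: float) -> float:
    """Return the updated automatic boost; commanded grip = user + boost."""
    if hand.slip_detected():
        auto_boost = min(auto_boost + GRIP_STEP, GRIP_MAX)  # reflexive tightening
    hand.command_grip(min(user_cmd + auto_boost, GRIP_MAX))
    return auto_boost
```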
This task seems a little brutal to put participants through, but it's representative of many situations where you can't look directly. If I asked you to reach into your pocket and pull out your keys, you don't have visual access to them, but it's something you can do seamlessly. Or if I put a bunch of coins in your pocket and told you to find a penny or find a quarter, you would easily be able to do that just based on the size of the object, without directly looking at it. This is something we take for granted that an amputee, with the current instantiation of prostheses, wouldn't be able to do, and we're trying to think about ways to explore it empirically.

Okay, some other prosthetics work we've been doing: this is work by my former student Ethan Miller and Henry Rocchi, who was a high school student working in the lab. This is a newer prosthesis we developed, also in collaboration with Mark and the folks at Dankmeyer, where instead of a single motor in the hand driving opening and closing, we decided to use two antagonistic tendons to do both flexion and extension of the hand. In addition, we have a haptic feedback system embedded within the cuff of the socket itself, which we're hoping can provide real-time feedback of the tension on the other side. We know that based on how I co-contract my muscles, I generate different endpoint stiffness and impedance in the limb, and there is evidence to suggest that we modulate our limb impedance for various types of tasks. So we're interested in exploring that possibility with a prosthesis. This is the first prototype of the device; all of this happened right before COVID, so we haven't really done much since, but I can at least show you this video of how we control it and then use it functionally in a simple box-and-blocks task. I'll skip ahead, since this part is long.
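To illustrate the impedance-modulation idea in this antagonistic-tendon design, here's a minimal sketch of one plausible mapping (my own construction; the actual controller wasn't detailed in the talk): the difference between the two EMG channels drives motion, while their co-contraction level sets stiffness.

```python
# Antagonistic control sketch: two EMG channels (flexor, extensor).
# The difference commands motion; the common (co-contraction) level commands
# stiffness, mimicking how co-contraction stiffens a natural limb.
# All names and gains are hypothetical.

K_MIN, K_MAX = 0.1, 1.0  # assumed normalized stiffness range

def tendon_commands(emg_flex: float, emg_ext: float):
    """emg_flex, emg_ext: normalized EMG activations in [0, 1]."""
    motion = emg_flex - emg_ext              # >0 closes the hand, <0 opens it
    cocontraction = min(emg_flex, emg_ext)   # shared activation level
    stiffness = K_MIN + (K_MAX - K_MIN) * cocontraction
    return motion, stiffness
```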
Really quickly, I just want to show some work we've been doing in collaboration with John Krakauer's group, the BLAM lab, and now Jing Xu and her lab at Georgia. This is work by my student Jacob Carducci, who really helped with a lot of the engineering around the hand device that John and Jing have been using in some of their experiments. Now Jake is taking that device, and the device Mohit built, and we're starting to look at assessing sensory impairment post-stroke using robotic means: taking a lot of the basic, fundamental work we're doing in haptic perception and applying it to better understanding sensory impairment after neurological disorder or disease. And last is some work by my student Mohit, who is exploring a simple telerobotic interface where we're trying to see what impact the dynamics of the telerobot have on perception of the remote environment. I won't go too deep into that, because I didn't give a primer for it, but basically it's a single-degree-of-freedom teleoperator testbed of sorts, where we can modulate the dynamics between the two sides of the teleoperator without affecting what happens at either end, and then examine what impact that has.

Okay, so with that I'll stop here; that was my last slide. I want to acknowledge all the students in my lab, who have done the lion's share of this work. I didn't even get a chance to talk about everyone's work, unfortunately, but I want to thank them for all their hard work and effort. If you want to learn more about the work we're doing in our lab, here's my lab website; you can easily go there and get in contact with us. And I'll just stop right there and take your questions.