Guest
Jivko Sinapov is an Assistant Professor at the Department of Computer Science at Tufts University. He joined the department in the Fall of 2017 and directs the Multimodal Learning, Interaction, and Perception Lab.
His main research interests lie in Cognitive and Developmental Robotics, Human-Robot Interaction, Robot Learning, and Computational Perception. The question that interests him most is, "What is intelligence and how can it be implemented in a physical robot?" He is passionate about enabling robots to operate in and learn about human-inhabited environments for extended periods of time.
In the Spring of 2019, Jivko led a team that won the Verizon 100K 5G EdTech Challenge with a proposal focused on investigating how Augmented Reality interfaces can enhance robotics education.
In the Spring of 2017, he co-organized an AAAI Spring Symposium titled Interactive Multi-Sensory Perception for Embodied Agents. In the Fall of 2019, he co-organized an AAAI Fall Symposium on AI for Human-Robot Interaction.
Transcript
Elisa Muñoz: Could you start by giving a little introduction about yourself?
Jivko Sinapov: Hi everyone. I'm Jivko Sinapov. I'm an assistant professor at Tufts and I lead the Multimodal Learning, Interaction, and Perception Laboratory here at Tufts. I'm very focused on how robots can use multiple sensory modalities and interaction so they can be better partners to humans in our environments.
Elisa Muñoz: What was your first approach to robotics? Because I noticed that you're not an engineer.
Jivko Sinapov: Ah, that's a very good question. You're right, I don't have formal engineering training. But to do robotics, you have to do some engineering. You have to build things, you have to put things together, in terms of hardware, in terms of software, and in terms of experimentation. So everything I know about engineering, I know by failing and by doing it the wrong way the first time. Most of my training is in computer science specifically, where, for me, one of the most important questions is: what is intelligence, and can it be physically implemented in a device like a robot that can operate smartly in our environments?
Philosophers don't agree, and brain scientists don't agree. So how do we even start thinking about the question of what intelligence is? Luckily, I'm not a philosopher and I'm not a psychologist, so I just look at their work, and at brain and cognitive science work, to try to figure out what some of the key properties of intelligent organisms are, in terms of adaptation, learning, and biological development, and how we can maybe mimic them in robots to some degree.
Elisa Muñoz: If I were to ask you what intelligence is, in one sentence?
Jivko Sinapov: In one sentence: the ability to make sense of the world around you and adapt to it as things change. Right now we do have machine learning, and technologists have developed very powerful methods that, for example, may enable robots or drones to recognize things in the environment, but they cannot adapt very well when the environment changes, when something new happens, or when something happens that the engineer did not foresee. Our systems may be quite smart in the areas they were designed for, but in all other areas they become very dumb very quickly.
Elisa Muñoz: How often do you feel like you receive new information when it comes to robotics?
Jivko Sinapov: New information? Probably every day. There's always something new if you go looking for it. If I want to read a news article or a research article about something relevant to our work, I can find one every day; there are probably way too many for me to keep up with. One of the things I like to do, especially before COVID, was go to conferences. The primary venues where people exchange ideas, talk about their work, and give presentations are the research conferences in robotics, and there were usually several a year that I went to. The good news is that they're finally starting to come back in person. Meeting people online was nice, but it was just not the same as in person.
I mean, networking over Zoom is actually surprisingly difficult, especially for PhD students who are just starting. I remember when I was a PhD student and got my first few papers into conferences, I'd go there, hang out and have drinks at the end, meet everybody, become Facebook friends, and build up a network around me. You can't really do the same thing over Zoom. So what I'm really happy about this summer is that I finally get to take my students to a real conference experience, where hopefully they have just as much fun as I did when I was at that stage.
Elisa Muñoz: Can you please explain how robots relate to psychology? It was something about behavior that I didn't quite understand. How is a robot related to behavior, and how is that related to human psychology?
Jivko Sinapov: Very, very good question. There are several ways to think about it. Someone who is a neuroscientist may say, let's try to build an artificial neural network that is in some way similar to the neural networks we see in nature. In practice, I would say artificial neural networks, which people have heard of under the term deep learning, are very different, fundamentally different, from how our brains are organized. On the other hand, psychologists sometimes provide insight into the mechanisms by which we learn. One of the most famous psychologists in history is [inaudible], who studied child psychology and children's developmental stages. Of course, he was wrong on many accounts because he did his work quite a long time ago.
Elisa Muñoz: How can you make them identify objects? You know, how do you make a robot understand that?
Jivko Sinapov: How do we make robots recognize things?
So robots are, as you said, very complicated, and there's literally a job for everybody. That's why the students who work in our lab are a combination of mechanical engineering, computer science, and electrical and computer engineering, because everyone can contribute something. For the question of recognizing objects in an image, there has been a variety of solutions even since the eighties. A lot of the current ones work with neural networks that you train on very large corpora, very large data sets. Imagine you have, you know, a million images of cats and a million images of dogs, and you provide those, with labels, to your neural network.
You can train it so that, given a new image, it can tell you: is it a cat or is it a dog? Now the challenge with a robot, of course, is that we can obviously try to find data sets from the computer vision community, but the robot itself may sometimes need to say, well, what is this new object? I don't know what it is. I have to go ask a human, and have the human tell me, so the robot can really learn things through interacting with humans. That's some of the research we have done, especially when I was a postdoc at UT Austin: if our robot sees a new object, it perceives it as an object, but it doesn't know what it is.
We can ask a human: give me some words that describe this object. And the human may say, well, it's teal colored, it's a coffee cup, it's full, and I can only tell it's full because I can feel the forces when I pick it up. So there's a lot of multi-sensory perception that comes in here. A lot of our work is focused on how we can get humans to teach robots to essentially learn the language people use about the objects in their environment, which is really, really amazing in my opinion.
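To make the cats-and-dogs example concrete, here is a minimal sketch of that kind of supervised training in Python with PyTorch. The data layout, model choice, and hyperparameters are illustrative assumptions rather than anything specified in the interview.

```python
# A minimal sketch of the supervised setup described above: a labeled image
# data set (cats vs. dogs) used to train a neural network classifier.
# The folder layout and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes labeled folders like data/train/cat/*.jpg and data/train/dog/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # two labels: cat, dog
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                   # a few passes over the labeled data
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Given a new image, the trained network predicts: cat or dog?
```

The contrast Jivko draws is that a robot in the world cannot assume such a labeled data set exists for every object it encounters; sometimes it has to ask a person for the words and labels interactively.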
Elisa Muñoz: Let's talk about experience: what has been the biggest technical challenge you have ever solved when it comes to building a robot?
Jivko Sinapov: Ah, the biggest technical challenge… So again, I wouldn't take all the credit for this, because the credit belongs to the team rather than to me alone. But I'll say one of the most impressive things we did in the past was when I was at UT Austin as a postdoc: we had robots that could essentially drive around in human environments and not run into anybody, localize themselves, map their environment, and perform simple tasks like object delivery, delivering messages, or guidance. This was a big project at UT Austin.
I would say in terms of a technical, and especially an engineering, accomplishment, in terms of actually building a system and having it work in the physical world as opposed to a virtual world on a computer, that was probably the most impressive thing. It was also kind of the hardest thing. Imagine driverless cars: of course it's difficult to make cars drive autonomously, but at least there are lanes, at least there are some rules that other people and other cars will follow. But if you're driving a Segway-based robot around, let's say, a big convention where people are just going all over the place, crossing each other's trajectories, there are no rules.
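For readers curious what the delivery and guidance tasks look like in software, here is a small, hypothetical sketch using the standard ROS navigation interface; it is not the actual UT Austin code, and the node name, frame, and coordinates are placeholder assumptions. The point is that mapping, localization, and obstacle avoidance are handled by the navigation stack, while task-level code just sends goals.

```python
# A hedged sketch of sending a navigation goal on a ROS-based service robot.
# Mapping and localization run as separate nodes; task code only asks the
# robot to go somewhere on the map. Coordinates below are made-up placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("delivery_demo")

client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"      # pose expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 12.0       # hypothetical office doorway
goal.target_pose.pose.position.y = 3.5
goal.target_pose.pose.orientation.w = 1.0     # face "forward" in the map frame

client.send_goal(goal)                        # the planner handles obstacles,
client.wait_for_result()                      # including people walking by
```

In a crowded, unstructured space like the convention floor Jivko describes, most of the difficulty lives inside that planner, which has to replan continuously around people whose trajectories follow no rules.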
Elisa Muñoz: When do you think we will be able to fully integrate robots into our lives?
Jivko Sinapov: Ah, so fully integrating robots into our lives. It's hard for me to answer that question, because of course some of us want it to happen sooner rather than later, but many people have made these kinds of predictions before. When I was a kid reading those predictions, it was all supposed to have happened by 2020. I also think we have to be careful, because perhaps full integration is not exactly what we are aiming for. There may be many reasons why you would want a robot in your house to help you with things, but in some situations there are also many reasons why you may not necessarily want a robot in your house.
I think the integration will be a bit more gradual, in the sense that we're going to get more and more robots that are a little more versatile, that can do a little more than just one thing. Many of us have a Roomba or robot vacuum in our house, and it does great things, but that's all it can do. I think the next leap forward will be when we have a robot in the house that can do more than one thing, and that can also get out of the way when we don't want it to be there. That's important from privacy and other ethical standpoints: we don't necessarily want the robot to be always there, always on, always watching, because for some people that's not okay. And one of the most important application areas where we think that integration is already happening is assistance for people with disabilities.
Elisa Muñoz: Any last advice you can give to our audience about robotics or about your personal experience, any helpful advice our audience would love to hear?
Jivko Sinapov: Keep trying, even if you fail, because we all fail. I've certainly found that many things did not work; in fact, most things you try when you're doing engineering and science do not work. That's true both in doing the science and in trying to get into a career in those fields. I know sometimes it feels a little rough, but it has become a lot better than it used to be. So again, keep trying, regardless of how much you fail, and eventually you will discover what drives you and what makes you happy professionally.
It's kind of a lucky position to be in, where I'm doing exactly what I want to be doing. I know that's hard to come by these days, but in science and engineering it's possible.
Elisa Muñoz: Thank you so much for being here. We were super lucky to have you.
This interview was brought to you by ControlHub, the most intuitive purchasing software to request, approve and track all your business purchases.