How Robots Can Help Us Act and Feel Younger – IEEE Spectrum

Toyota’s Gill Pratt on enhancing independence in old age
By 2050, the global population aged 65 or more will be nearly double what it is today. The number of people over the age of 80 will triple, approaching half a billion. Supporting an aging population is a worldwide concern, but this demographic shift is especially pronounced in Japan, where more than a third of the population will be 65 or older by midcentury.
Toyota Research Institute (TRI), which was established by Toyota Motor Corp. in 2015 to explore autonomous cars, robotics, and “human amplification technologies,” has also been focusing a significant portion of its research on ways to help older people maintain their health, happiness, and independence as long as possible. While an important goal in itself, improving self-sufficiency for the elderly also reduces the amount of support they need from society more broadly. And without technological help, sustaining this population in an effective and dignified manner will grow increasingly difficult—first in Japan, but globally soon after.
Gill Pratt. Toyota Research Institute
Gill Pratt, Toyota’s Chief Scientist and the CEO of TRI, believes that robots have a significant role to play in assisting older people by solving physical problems as well as providing mental and emotional support. With a background in robotics research and five years as a program manager at the Defense Advanced Research Projects Agency, during which time he oversaw the DARPA Robotics Challenge in 2015, Pratt understands how difficult it can be to bring robots into the real world in a useful, responsible, and respectful way. In an interview earlier this year in Washington, D.C., with IEEE Spectrum’s Evan Ackerman, he said that the best approach to this problem is a human-centric one: “It’s not about the robot, it’s about people.”
What are the important problems that we can usefully and reliably solve with home robots in the relatively near term?
Gill Pratt: We are looking at the aging society as the No. 1 market driver of interest to us. Over the last few years, we’ve come to the realization that an aging society creates two problems. One is within the home for an older person who needs help, and the other is for the rest of society—for younger people who need to be more productive to support a greater number of older people. The dependency ratio is the fraction of the population that works relative to the fraction that does not. As an example, in Japan, in not too many years, it’s going to get pretty close to 1:1. And we haven’t seen that, ever.
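As a back-of-the-envelope sketch of what a ratio approaching 1:1 means (the numbers below are hypothetical, not actual Japanese demographics):

```python
# Hypothetical population figures (in millions) to illustrate the
# dependency ratio Pratt describes. A value near 1.0 means roughly
# one non-working person for every working person.
def dependency_ratio(working_age: float, dependents: float) -> float:
    """Dependents per working-age person."""
    return dependents / working_age

# Illustrative numbers only, not real demographic data.
print(dependency_ratio(working_age=50.0, dependents=48.0))  # 0.96, close to 1:1
```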
Solving physical problems is the easier part of assisting an aging society. The bigger issue is actually loneliness. This doesn’t sound like a robotics thing, but it could be. Related to loneliness, the key issue is having purpose, and feeling that your life is still worthwhile.
What we want to do is build a time machine. Of course we can’t do that, that’s science fiction, but we want to be able to have a person say, “I wish I could be 10 years younger” and then have a robot effectively help them as much as possible to live that kind of life.
There are many different robotic approaches that could be useful to address the problems you’re describing. Where do you begin?
Pratt: Let me start with an example, and this is one we talk about all of the time because it helps us think: Imagine that we built a robot to help with cooking. Older people often have difficulty with cooking, right?
Well, one robotic idea is to just cook meals for the person. This idea can be tempting, because what could be better than a machine that does all the cooking? Most roboticists are young, and most roboticists have all these interesting, exciting, technical things to focus on. And they think, “Wouldn’t it be great if some machine made my meals for me and brought me food so I could get back to work?”
But for an older person, what they would truly find meaningful is still being able to cook, and still being able to have the sincere feeling of “I can still do this myself.” It’s the time-machine idea—helping them to feel that they can still do what they used to be able to do and still cook for their family and contribute to their well-being. So we’re trying to figure out right now how to build machines that have that effect—that help you to cook but don’t cook for you, because those are two different things.
A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Tidying and cleaning are physically repetitive tasks that are ideal for home robots, but still a challenge since every home is different, and every person expects their home to be organized and cleaned differently. Toyota Research Institute
How can we manage this temptation to focus on solving technical problems rather than more impactful ones?
Pratt: What we have learned is that you start with the human being, the user, and you say, “What do they need?” And even though all of us love gadgets and robots and motors and amplifiers and hands and arms and legs and stuff, just put that on the shelf for a moment and say: “Okay. I want to imagine that I’m a grandparent. I’m retired. It’s not quite as easy to get around as when I was younger. And mostly I’m alone.” How do we help that person have a truly better quality of life? And out of that will occasionally come places where robotic technology can help tremendously.
A second point of advice is to try not to look for your keys where the light is. There’s an old adage about a person who drops their keys on the street at night, and so they go look for them under a streetlight, rather than the place they dropped them. We have an unfortunate tendency in the robotics field—and I’ve done it too—to say, “Oh, I know some mathematics that I can use to solve this problem over here.” That’s where the light is. But unfortunately, the problem that actually needs to get solved is over there, in the dark. It’s important to resist the temptation to use robotics as a vehicle for only solving problems that are tractable.
It sounds like social robots could potentially address some of these needs. What do you think is the right role for social robots for elder care?
Pratt: For people who have advanced dementia, things can be really, really tough. There are a variety of robotic-like things or doll-like things that can help a person with dementia feel much more at ease and genuinely improve the quality of their life. They sometimes feel creepy to people who don’t have that disability, but I believe that they’re actually quite good, and that they can serve that role well.
There’s another huge part of the market, if you want to think about it in business terms, where many people’s lives can be tremendously improved even when they’re simply retired. Perhaps their spouse has died, they don’t have much to do, and they're lonely and depressed. Typically, many of them are not technologically adept the way that their kids or their grandkids are. And the truth is their kids and their grandkids are busy. And so what can we really do to help?
Here there’s a very interesting dilemma, which is that we want to build a social-assistive technology, but we don’t want to pretend that the robot is a person. We’ve found that people will anthropomorphize a social machine, which shouldn’t be a surprise, but it’s very important to not cross a line where we are actively trying to promote the idea that this machine is actually real—that it’s a human being, or like a human being.
So there are a whole lot of things that we can do. The field is just beginning, and much of the improvement to people's lives can happen within the next 5 to 10 years. In the social robotics space, we can use robots to help connect lonely people with their kids, their grandkids, and their friends. We think this is a huge, untapped potential.
A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Perceiving and grasping transparent objects like drinking glasses is a particularly difficult task. Toyota Research Institute
Where do you draw the line with the amount of connection that you try to make between a human and a machine?
Pratt: We don’t want to trick anybody. We should be very ethically stringent, I think, to not try to fool anyone. People will fool themselves plenty—we don't have to do it for them.
To whatever extent that we can say, “This is your mechanized personal assistant,” that’s okay. It’s a machine, and it’s here to help you in a personalized way. It will learn what you like. It will learn what you don’t like. It will help you by reminding you to exercise, to call your kids, to call your friends, to get in touch with the doctor, all of those things that it's easy for people to miss on their own. With these sorts of socially assistive technologies, that’s the way to think of it. It’s not taking the place of other people. It’s helping you to be more connected with other people, and to live a healthier life because of that.
How much do you think humans should be in the loop with consumer robotic systems? Where might it be most useful?
Pratt: We should be reluctant to do person-behind-the-curtain stuff, although from a business point of view, we absolutely are going to need that. For example, say there's a human in an automated vehicle that comes to a double-parked car, and the automated vehicle doesn’t want to go around by crossing the double yellow line. Of course the vehicle should phone home and say, “I need an exception to cross the double yellow line.” A human being, for all kinds of reasons, should be the one to decide whether it’s okay to do the human part of driving, which is to make an exception and not follow the rules in this particular case.
However, having the human actually drive the car from a distance assumes that the communication link between the two of them is so reliable it’s as if the person is in the driver’s seat. Or, it assumes that the competence of the car to avoid a crash is so good that even if that communications link went down, the car would never crash. And those are both very, very hard things to do. So human beings that are remote, that perform a supervisory function, that’s fine. But I think that we have to be careful not to fool the public by making them think that nobody is in that front seat of the car, when there’s still a human driving—we’ve just moved that person to a place you can’t see.
In the robotics field, many people have spoken about this idea that we’ll have a machine to clean our house operated by a person in some part of the world where it would be good to create jobs. I think pragmatically it’s actually difficult to do this. And I would hope that the kinds of jobs we create are better than sitting at a desk and guiding a cleaning machine in someone’s house halfway around the world. It’s certainly not as physically taxing as having to be there and do the work, but I would hope that the cleaning robot would be good enough to clean the house by itself almost all the time and just occasionally when it’s stuck say, “Oh, I’m stuck, and I’m not sure what to do.” And then the human can help. The reason we want this technology is to improve quality of life, including for the people who are the supervisors of the machine. I don’t want to just shift work from one place to the other.
These bubble grippers are soft to the touch, making them safe for humans to interact with, but they also include the necessary sensing to be able to grasp and identify a wide variety of objects. Toyota Research Institute
Can you give an example of a specific technology that TRI is working on that could benefit the elderly?
Pratt: There are many examples. Let me pick one that is very tangible: the Punyo project.
In order to truly help elderly people live as if they are younger, robots not only need to be safe, they also need to be strong and gentle, able to sense and react to both expected and unexpected contacts and disturbances the way a human would. And of course, if robots are to make a difference in quality of life for many people, they must also be affordable.
Compliant actuation, where the robot senses physical contact and reacts with flexibility, can get us part way there. To get the rest of the way, we have developed instrumented, functional, low-cost compliant surfaces that are soft to the touch. We started with bubble grippers that have high-resolution tactile sensing for hands, and we are now adding compliant surfaces to all other parts of the robot's body to replace rigid metal or plastic. Our hope is to enable robot hardware to have the strength, gentleness, and physical awareness of the most able human assistant, and to be affordable by large numbers of elderly or disabled people.
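Compliant actuation is a standard robotics technique; one common form is a saturated virtual spring-damper law, sketched below. The gains and torque limit are illustrative assumptions, not TRI's implementation:

```python
# Minimal sketch of compliant actuation: a virtual spring-damper command,
# saturated so the joint yields gently under unexpected contact instead
# of fighting it. Gains and limits are illustrative, not TRI's values.
def compliant_torque(q_des, q, dq, k=5.0, b=0.5, tau_max=2.0):
    """Spring-damper torque toward q_des, clipped to +/- tau_max."""
    tau = k * (q_des - q) - b * dq
    return max(-tau_max, min(tau_max, tau))

# A large position error no longer produces a large force:
print(compliant_torque(q_des=1.0, q=0.0, dq=0.0))  # clipped to 2.0
```

The saturation is what makes the robot safe to touch: no matter how far the arm is pushed from its goal, the commanded torque stays bounded.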
What do you think the next DARPA challenge for robotics should be?
Pratt: Wow. I don’t know! But I can tell you what ours is [at TRI]. We have a challenge that we give ourselves right now in the grocery store. This doesn't mean we want to build a machine that does grocery shopping, but we think that trying to handle all of the difficult things that go on when you’re in the grocery store—picking things up even though there’s something right next to it, figuring out what the thing is even if the label that’s on it is half torn, putting it in the basket—this is a challenge task that will develop the same kind of capabilities we need for many other things within the home. We were looking for a task that didn’t require us to ask for 1,000 people to let us into their homes, and it turns out that the grocery store is a pretty good one. We have a hard time helping people to understand that it’s not about the store, it’s actually about the capabilities that let you work in the store, and that we believe will translate to a whole bunch of other things. So that’s the sort of stuff that we're doing work on.
As you’ve gone through your career from academia to DARPA and now TRI, how has your perspective on robotics changed?
Pratt: I think I’ve learned that lesson that I was telling you about before—I understand much more now that it’s not about the robot, it’s about people. And ultimately, taking this user-centered design point of view is easy to talk about, but it’s really hard to do.
As technologists, the reason we went into this field is that we love technology. I can sit and design things on a piece of paper and feel great about it, and yet I’m never thinking about who it is actually going to be for, and what am I trying to solve. So that’s a form of looking for your keys where the light is.
The hard thing to do is to search where it’s dark, and where it doesn’t feel so good, and where you actually say, “Let me first of all talk to a lot of people who are going to be the users of this product and understand what their needs are. Let me not fall into the trap of asking them what they want and trying to build that because that’s not the right answer.” So what I’ve learned most of all is the need to put myself in the user’s shoes, and to really think about it from that point of view.
Gird yourself for muscle shirts that twitch
The UNSW team’s smart textile enables fabric reconfiguration that can produce shape-morphing structures, such as this butterfly and flower, which can move using hydraulics.
Recent advances in soft robotics have opened up possibilities for the construction of smart fibers and textiles that have a variety of mechanical, therapeutic, and wearable possibilities. These fabrics, when programmed to expand or contract through thermal, electric, fluid, or other stimuli, can produce motion, deformation, or force for different functions.

Engineers at the University of New South Wales (UNSW), Sydney, Australia, have developed a new class of fluid-driven smart textiles that can “shape-shift” into 3D structures. Despite recent advances in the development of active textiles, “they are either limited with slow response times due to the requirement of heating and cooling, or difficult to knit, braid, or weave in the case of fluid-driven textiles,” says Thanh Nho Do, senior lecturer at the UNSW’s Graduate School of Biomedical Engineering, who led the study.
To overcome these drawbacks, the UNSW team demonstrated a proof of concept of miniature, fast-responding artificial muscles made up of long, fluid-filled silicone tubes that can be manipulated through hydraulic pressure. The silicone tube is surrounded by an outer helical coil that acts as a constraint layer to keep it from expanding like a balloon. Because of this constraint, only axial elongation is possible, giving the muscle the ability to expand under increased hydraulic pressure or contract when pressure is decreased. Using this mechanism, says Do, they can program a wide range of motion by changing the hydraulic pressure.
“A unique feature of our soft muscles compared to others is that we can tune their generated force by varying the stretch ratio of the inner silicone tube at the time they are fabricated, which provides high flexibility for use in specific applications,” Do says.
The researchers used a simple, low-cost fabrication technique, in which a long, thin silicone tube is directly inserted into a hollow microcoil to produce the artificial muscles, with a diameter ranging from a few hundred micrometers to several millimeters. “With this method, we could mass-produce soft artificial muscles at any scale and size—diameter could be down to 0.5 millimeters, and length at least 5 meters,” Do says.
The filament structure of the muscles allows them to be stored in spools and cut to meet specific length requirements. The team used two methods to create smart fibers from the artificial muscles. One was using them as active yarns to braid, weave, or knit into active fabrics using traditional textile-making technologies. The other was by integrating them directly into conventional, passive fabrics.
The combination of hydraulic pressure, fast response times, light weight, small size, and high flexibility makes the UNSW’s smart textiles versatile and programmable. According to Do, the expansion and contraction of their active fabrics is similar to those of human muscle fibers.
This versatility opens up potential applications in soft robotics, including shape-shifting structures, biomimicking soft robots, locomotion robots, and smart garments. There are possibilities for use as medical/therapeutic wearables, as assistive devices for those needing help with movement, and as soft robots to aid the rescue and recovery of people trapped in confined spaces.
Although these artificial muscles are still a proof of concept, Do is optimistic about commercialization in the near future. “We have a Patent Cooperation Treaty application around these technologies,” he says. “We are also working on clinical validation of our technology in collaborations with local clinicians, including smart compression garments, wearable assistive devices, and soft haptic interfaces.”
Meanwhile, the research team continues to work on improvements. “We have currently achieved an outer diameter of 0.5 mm, which we believe is still large compared to the human muscle fibers,” says Do. “[So] one of the main challenges of our technology is how to scale the muscle to a smaller size, let’s say less than 0.1 mm in diameter.”
Another challenge, he adds, relates to the hydraulic source of power, which requires electric wires to connect and drive the muscles. “Our team is working on the integration of a new soft, miniature pump and wireless communication modules that will enable untethered driving systems to make it a smaller and more compact device.”
Analytical modeling for bending actuators is yet another area of improvement. Concomitant studies to demonstrate the feasibility of machine-made smart textiles and washable smart textiles in the smart garment industry are also necessary, the researchers say, as are further studies regarding incorporating functional components into smart textiles to provide additional benefits.
The Manchester “Baby” was the first electronic digital computer to store a program
Joanna Goodrich is the assistant editor of The Institute, covering the work and accomplishments of IEEE members and IEEE and technology-related events. She has a master's degree in health communications from Rutgers University, in New Brunswick, N.J.
A replica of the Manchester “Baby” computer at the Science and Industry Museum in Manchester, England.
Whether you’re streaming a movie on Netflix, playing a video game, or just looking at digital photos, your computer is regularly dipping into its memory for instructions. Without random-access memory, a computer today can’t even boot up.
Over the years, memory has been made up of vacuum tubes, glass tubes filled with mercury, and, most recently, semiconductors.
But the first computers didn’t have any reprogrammable memory at all. Until the late 1940s, every time a machine needed to change tasks, it had to be physically reprogrammed and rewired, according to the Science and Industry Museum in Manchester, England.
The first electronic digital computer capable of storing instructions and data in a read/write memory was the Manchester Small Scale Experimental Machine, known as the Manchester “Baby.” It successfully ran a program from memory in June 1948.
Computing pioneers Frederic C. Williams, Tom Kilburn, and Geoff Tootill developed and built the machine and its storage system—the Williams-Kilburn tube—at the University of Manchester.
“The Baby was very limited in what it could do, but it was the first-ever real-life demonstration of electronic stored-program computing, the fast and flexible approach used in nearly all computers today,” said James Sumner, a lecturer on the history of technology at the University of Manchester, in an interview with the Manchester Evening News.
The IEEE commemorated Baby as an IEEE Milestone during a ceremony held on 21 June at the university.
How Baby Came to Remember
After World War II, research groups around the world began investigating ways to build computers that could perform multiple tasks from memory. One such researcher was British engineer F.C. Williams, a radar pioneer who worked at the Telecommunications Research Establishment (TRE), in Malvern, England.
Williams had an impressive background in radar systems and electronics research. He helped develop the “identification, friend or foe” system, which used radar pulses to distinguish Allied aircraft during the war.
Because of his expertise, in 1945 the TRE tasked Williams with editing and contributing content to a series of books on radar techniques. As part of his research, he traveled to Bell Labs in Murray Hill, N.J., to learn about work being done to remove ground echoes from the radar traces on CRTs. Williams came up with the idea of using two CRTs, and storing the radar trace by passing it back and forth between the two. Williams returned to the TRE and began to investigate the idea, realizing that the approach also could be used to store digital data, with just one CRT. Kilburn, a scientific officer at the TRE, joined Williams in his research.
“The Baby was the first-ever real-life demonstration of electronic stored-program computing, and the fast and flexible approach is used in nearly all computers today.”
A CRT uses an electron gun to send a focused beam of electrons toward a phosphor-laden screen. The phosphors glow where the beam strikes; the glow eventually fades until struck again by the electron beam. To store digital data, Williams and Kilburn used a more powerful electron beam. When it hit the screen, it knocked a few electrons aside, briefly creating a positively charged spot surrounded by a negative halo. Reading the data involved writing to each data spot on that plate and decoding the pattern of current generated in a nearby metal plate—which would depend on whether there was something written at that spot previously.
It turned out that the electron charges leaked away over time (just as phosphors on a TV screen fade) and didn’t allow the tube to keep storing data, according to an entry about the Milestone on the Engineering and Technology History Wiki. To maintain the charge, the electron beam had to repeatedly read the data stored on the phosphor and regenerate the associated charge pattern. Such refreshing is also used in the DRAM present in today’s computers.
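The decay-and-refresh cycle can be sketched as a toy simulation (a deliberate simplification; the real tube stored charge patterns on a CRT screen, not numbers in a list):

```python
# Toy model of Williams-Kilburn tube storage: charge leaks away each
# time step, and a bit is lost once it drops below a readable threshold
# unless the beam re-reads and rewrites (refreshes) the spot in time.
DECAY = 0.8        # fraction of charge surviving one time step
THRESHOLD = 0.3    # below this, the spot can no longer be read reliably

def tick(spots, refresh=True):
    """Advance one time step: charge leaks; optionally regenerate."""
    decayed = [c * DECAY for c in spots]
    if refresh:
        # Reading each spot rewrites it at full charge, like DRAM refresh.
        return [1.0 if c >= THRESHOLD else 0.0 for c in decayed]
    return decayed

bits = [1.0, 1.0, 0.0, 1.0]
for _ in range(10):
    bits = tick(bits, refresh=True)
print(bits)  # pattern survives: [1.0, 1.0, 0.0, 1.0]
```

Run the same loop with `refresh=False` and the stored charge falls below the threshold within a few ticks, which is exactly why the regeneration cycle was needed.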
In 1946, the men demonstrated a device that could store 1 bit. It is now called the Williams-Kilburn tube, or sometimes just the Williams tube.
Also in 1946, Williams joined the University of Manchester as chair of its electrotechnology department. The TRE temporarily assigned Kilburn to work with him there, and the two continued their research at the university’s Computing Machine Laboratory. A year later Williams recruited computer scientist Tootill to join the team. And in 1947 they successfully stored 2,048 bits using a Williams-Kilburn tube.
Building the Prototype
To test the reliability of the Williams-Kilburn tube, in 1948 Kilburn and Tootill, with guidance from Computing Machine Laboratory founder Max Newman and computer scientist Alan Turing, built a small-scale experimental machine. It took them six months, using surplus parts from WWII-era code-breaking machines. And the Manchester Baby was born.
The Baby took up an entire room in the laboratory building. It was 5 meters long, 2 meters tall, and weighed almost a tonne. The computer consisted of metal racks, hundreds of valves and vacuum tubes, and a panel of vertically mounted hand-operated switches. Users entered programs into memory, bit by bit, via the switches, and read the output directly off the face of the Williams-Kilburn tube.
On 21 June 1948, Baby ran its first program. Written by Kilburn to find the highest factor of an integer, it consisted of 17 instructions. The machine ran through 3.5 million calculations in 53 minutes before getting the correct answer.
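In modern terms, the program's approach can be sketched as follows. The sketch assumes the commonly reported detail that the first run searched for the highest proper factor of 2^18, testing divisibility by repeated subtraction because the Baby had no divide instruction:

```python
# Sketch of the Baby's first program in modern terms: find the highest
# proper factor of n by testing candidate divisors downward from n - 1.
# The Baby had no divide instruction, so divisibility was checked by
# repeated subtraction, hence the millions of operations per run.
def highest_proper_factor(n: int) -> int:
    for d in range(n - 1, 0, -1):
        r = n
        while r > 0:        # repeated subtraction stands in for modulo
            r -= d
        if r == 0:          # d divides n exactly
            return d
    return 1

print(highest_proper_factor(2**18))  # 131072, i.e. 2**17
```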
By 1953, 17 pioneering computer design groups worldwide had adopted the Williams-Kilburn RAM technology.
Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Manchester Section sponsored the nomination for the Baby. The Baby Milestone plaque, which is to be displayed outside on the Coupland 1 building at the University of Manchester, reads:
“At this site on 21 June 1948 the ‘Baby’ became the first computer to execute a program stored in addressable read-write electronic memory. ‘Baby’ validated Williams-Kilburn tube random-access memories, later widely used, and led to the 1949 Manchester Mark I which pioneered index registers. In February 1951, Ferranti Ltd.’s commercial derivative became the first electronic computer marketed as a standard product delivered to a customer.”
Team from SICK’s TiM$10K Challenge creates system to automate road maintenance
Developed by a team of students at Worcester Polytechnic Institute as part of SICK's TiM$10K Challenge, the ROADGNAR system uses LiDAR to collect detailed data on the surface of a roadway.
This is a sponsored article brought to you by SICK Inc.
From advanced manufacturing to automated vehicles, engineers are using LiDAR to change the world as we know it. For the second year, students from across the country submitted projects to SICK's annual TiM$10K Challenge.
The first-place team in the 2020 TiM$10K Challenge hails from Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team comprised undergraduate seniors Daniel Pelaez and Noah Budris and undergraduate junior Noah Parker.
With the help of their academic advisor, Dr. Alexander Wyglinski, Professor of Electrical Engineering and Robotics Engineering at WPI, the team took first place in the 2020 TiM$10K Challenge with their project titled ROADGNAR, a mobile and autonomous pavement quality data collection system.
In this challenge, SICK reached out to universities across the nation, looking to support innovation and student achievement in automation and technology. Participating teams were supplied with a SICK 270° LiDAR sensor (the TiM) and accessories. They were challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK scanner in any industry.
Around the United States, many of the nation's roadways are in poor condition, most often from potholes and cracks in the pavement, which can make driving difficult. Many local governments agree that infrastructure is in need of repair, but with a lack of high-quality data, inconsistencies in damage reporting, and an overall lack of adequate prioritization, this is a difficult problem to solve.
Pelaez, Parker, and Budris first came up with the idea of ROADGNAR before they had even learned of the TiM$10K Challenge. They noticed that the roads in their New England area were in poor condition, and wanted to see if there was a way to improve how road maintenance is performed.
In their research, they learned that many local governments use outdated and manual processes. Many send out workers to check for poor road conditions, who then log the information in notebooks.
The team began working on a solution to help solve this problem. It was at a career fair that Pelaez met a SICK representative, who encouraged him to apply to the TiM$10K Challenge.
SICK is excited to announce the 2022-2023 edition of the SICK TiM$10K Challenge. Twenty teams will be selected to participate in the challenge, and the chosen teams will be supplied with a 270º SICK LiDAR sensor (TiM) and accessories. The teams will be challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK LiDAR in any industry. This can be part of a senior design or capstone project for students.
Awards:
The three winning teams will receive cash awards:
• 1st Place – $10K
• 2nd Place – $5K
• 3rd place – $3K
In addition to bragging rights and the cash prize, the 1st place winning team, along with the advising professor, will be offered an all-expenses-paid trip to SICK Germany to visit the SICK headquarters and manufacturing facility!
Registration is now open for the academic year 2022-2023!
Using SICK's LiDAR technology, the ROADGNAR takes a 3D scan of the road and the data is then used to determine the exact level of repair needed.

ROADGNAR collects detailed data on the surface of any roadway, while still allowing for easy integration onto any vehicle. With this automated system, road maintenance can become a faster, more reliable, and more efficient process for towns and cities around the country.
ROADGNAR solves this problem through two avenues: hardware and software. The team designed two mounting brackets to connect the system to a vehicle. The first, located in the back of the vehicle, supports a LiDAR scanner. The second is fixed in line with the vehicle's axle and supports a wheel encoder, which is wired to the fuse box.
"It definitely took us a while to figure out a way to power ROADGNAR so we wouldn't have to worry about it shutting off while the car was in motion," said Parker.
Also wired to the fuse box is a GPS module within the vehicle itself. Data transfer wires are attached to these three systems and connected to a central processing unit within the vehicle.
When the car is started, all connected devices turn on. The LiDAR scanner collects road surface data, the wheel encoder tracks an accurate measurement of the distance travelled by the vehicle, and the GPS generates geo-tags on a constant basis. All this data is stored in the onboard database, where a monitor presents it all to the user. The data is then stored in a hard drive.
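A hypothetical sketch of how one such fused record might look; the field names and the roughness metric below are illustrative assumptions, not ROADGNAR's actual schema:

```python
from dataclasses import dataclass

# Hypothetical record fusing the three sensor streams described above.
# Field names are illustrative, not ROADGNAR's actual database schema.
@dataclass
class RoadSample:
    distance_m: float         # cumulative distance from the wheel encoder
    lat: float                # GPS geo-tag
    lon: float
    profile_mm: list          # one LiDAR cross-section of the road surface

def roughness(sample: RoadSample) -> float:
    """Crude roughness metric: peak-to-trough height of one scan line."""
    return max(sample.profile_mm) - min(sample.profile_mm)

s = RoadSample(distance_m=12.4, lat=42.2746, lon=-71.8063,
               profile_mm=[0.0, -2.5, -18.0, -3.1, 0.4])  # deep dip: pothole?
print(roughness(s))  # 18.4 mm of surface variation
```

Geo-tagging every sample is what lets a public-works department jump straight from a flagged reading to the stretch of pavement that produced it.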
Much like the roads in their Massachusetts town, the creation process of ROADGNAR was not without its challenges. The biggest problem took the form of the COVID-19 pandemic, which hit the ROADGNAR team in the middle of development. Once WPI closed to encourage its students and faculty to practice social distancing, the team was without a base of operations.
"When the coronavirus closed our school, we were lucky enough to live pretty close to each other," said Paleaz. "We took precautions, but were able to come together to test and power through to finish our project."
The WPI students first came up with the idea of ROADGNAR when they noticed that the roads in their New England area were in poor condition, and wanted to see if there was a way to help improve road maintenance.
SICK
Integrating LiDAR into the car was also a challenge. Occasionally, the LiDAR would shut off when the car began moving. The team had to take several measures to keep the sensor online, often contacting SICK's help center for instruction.
"One of the major challenges was making sure we were getting enough data on a given road surface," said Budris. "At first we were worried that we wouldn't get enough data from the sensor to make ROADGNAR feasible, but we figured that if we drove at a slow and constant rate, we'd be able to get accurate scans."
With the challenge complete, Pelaez, Budris, and Parker are looking to turn ROADGNAR into a genuine product. They have already contacted an experienced business partner to help them determine their next steps.
The WPI students hope to turn their ROADGNAR system into a genuine product. They have already contacted a business partner to help them determine their next steps.
SICK
They are now interviewing with representatives from various Departments of Public Works throughout Massachusetts and Connecticut. Thirteen municipalities have indicated that they would be extremely interested in utilizing ROADGNAR, as it would drastically reduce the time needed to assess all the roads in the area. The trio is excited to see how different LiDAR sensors can help refine ROADGNAR into a viable product.
"We'd like to keep the connection going," explained Pelaez. "If we can keep the door open for a potential partnership between us and SICK, that'd be great."
SICK is now accepting entries for the TiM$10K Challenge for the 2022-2023 school year!
Student teams are encouraged to use their creativity and technical knowledge to incorporate the SICK LiDAR for any industry in any application. Advisors/professors are allowed to guide the student teams as required.
