NSF Smart and Autonomous Systems (S&AS) Program

On October 25, 2016, I received an email from NSF announcing a new program called Smart and Autonomous Systems (S&AS). It is another robotics-related program after the National Robotics Initiative (NRI). Before NRI, robotics researchers could only submit their proposals to the Robust Intelligence (RI) program under the CISE/IIS division.

The new S&AS program is good news for the robotics community. It reflects US national interest in robotics and artificial intelligence. Officially, the program focuses on Intelligent Physical Systems (IPS), a new term that covers any physical system that is intelligent. The emphasis here is on both “intelligent” and “physical.” AlphaGo, for example, is not a physical system, so proposing a deep-learning online gaming system would probably not get funded. Proposing a novel robotic actuator would not fit this program either. This agrees with the idea I have been pushing since the beginning of this year: bringing AI into the physical world.

Unlike NRI, S&AS prefers long-term autonomy with little or no human operator intervention. Accordingly, IPS places more emphasis on robotic platforms and networked systems that combine intelligence, cognizance, perception, actuation, and communication.

There could be many different types of IPS: cognizant IPS, taskable IPS, reflective IPS, knowledge-rich IPS, ethical IPS, and others. The program officers behind the program have given more concrete definitions for each.

A cognizant IPS would have high-level awareness to support long-term autonomy. That awareness includes not only knowledge of its own capabilities and limitations, but also the ability to predict possible failures arising from them. The IPS would need to know when things go wrong and come up with contingency plans to prevent failure all by itself, so that it could run autonomously and safely over a very long time. If the Fukushima nuclear power plant had been a cognizant IPS, the devastating Fukushima Daiichi nuclear disaster might have been avoided. A cognizant IPS should also be as transparent as possible, so that people working around it can easily understand what the system is up to. Better still, the IPS should be able to quickly get the help it needs by communicating with people and explaining its situation efficiently.
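
To make this concrete, here is a minimal sketch of how such self-monitoring might look. The capability names, risk numbers, and contingency table below are purely hypothetical and not part of the program description.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    health: float        # 0.0 (failed) to 1.0 (nominal), estimated from self-diagnostics
    failure_risk: float  # predicted probability of failure over the next planning horizon

# Hypothetical contingency table: which fallback behavior covers which capability loss.
CONTINGENCIES = {
    "gps": "switch to visual-inertial odometry and reduce speed",
    "battery": "return to the charging dock before hitting the reserve threshold",
    "arm_motor": "pause manipulation tasks and request remote assistance",
}

def monitor(capabilities, risk_threshold=0.2):
    """One cycle of cognizant self-monitoring: flag risky capabilities,
    select contingency plans, and produce a human-readable explanation."""
    report = []
    for cap in capabilities:
        if cap.failure_risk > risk_threshold or cap.health < 0.5:
            plan = CONTINGENCIES.get(cap.name, "enter safe mode and alert operators")
            report.append(f"{cap.name}: risk={cap.failure_risk:.2f} -> {plan}")
    return report or ["all capabilities nominal"]

if __name__ == "__main__":
    state = [Capability("gps", 0.9, 0.35), Capability("battery", 0.6, 0.1)]
    for line in monitor(state):
        print(line)  # transparent explanation for the humans nearby
```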

A knowledge-rich IPS is more than a walking Wikipedia. It should be able to use all the content on the Internet to learn and reason. First, the knowledge on the Internet must be represented in a way an IPS can use; the representation could be symbolic, ontological, probabilistic, or a mixture. Then the IPS can reason over this well-represented knowledge to generate useful guidance and decisions and to support learning. For example, a knowledge-rich IPS could learn cooking from YouTube by first converting cooking videos into a functional object-oriented network and then using that network to learn and reason about how to cook meals.
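
As a toy illustration of that last example, the sketch below encodes a few made-up functional units (input objects plus a motion producing output objects) and backward-chains through them to plan a dish. The specific nodes and edges are my own invention, not taken from any published network.

```python
# Toy functional object-oriented network: each functional unit maps input
# objects plus a manipulation motion to output objects (names are illustrative).
UNITS = [
    ({"onion"}, "chop", {"chopped onion"}),
    ({"beef"}, "cut", {"beef cubes"}),
    ({"beef cubes", "chopped onion", "water"}, "simmer", {"beef stew"}),
]

def plan(goal, have, units=UNITS):
    """Backward-chain through the network to find motions that produce `goal`."""
    if goal in have:
        return []
    for inputs, motion, outputs in units:
        if goal in outputs:
            steps = []
            for item in inputs:
                sub = plan(item, have, units)
                if sub is None:
                    break
                steps.extend(sub)
            else:
                return steps + [f"{motion}: {sorted(inputs)} -> {sorted(outputs)}"]
    return None  # no known way to produce the goal

print(plan("beef stew", {"beef", "onion", "water"}))
```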

A taskable IPS should be able to take abstract commands from us, understand them, and figure out what should be done and how to do it. For example, a taskable IPS should be able to take “I want beef stew for dinner” from a person and figure out: what kind of beef stew the person really wants; what should be in it; if not all the ingredients are available in the fridge, what the backup plan would be; and how to make it. Then the IPS should use its sensors and actuators to find all the ingredients, take them out of the fridge, cut the beef into small pieces, chop the other ingredients, put everything in a pan, pour in the right amount of water, add salt and other seasonings, and turn on the stove. When the stew is ready, the IPS should set the dining table and let the person know that dinner is ready. After dinner, the IPS should clean up and wash all the dishes.
In other words, the IPS should understand natural language, reason from a vague command to a thoughtful executable plan, recognize objects in a cluttered environment accurately, perceive its 3D surroundings precisely, and perform different grasping and manipulation motions smoothly.
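
As a purely illustrative sketch (the recipe template and substitution table below are my own invention, not part of the program), a taskable IPS might ground a vague request against a recipe, check what is actually in the fridge, and fall back to substitutes before emitting an ordered plan.

```python
# Hypothetical recipe template and substitution table for grounding a vague request.
RECIPES = {
    "beef stew": ["beef", "potato", "carrot", "onion", "salt"],
}
SUBSTITUTES = {"potato": "sweet potato", "carrot": "celery"}

def ground_request(request, fridge):
    """Turn a vague command like 'I want beef stew for dinner' into an
    ordered plan, substituting or flagging anything that is missing."""
    dish = next((d for d in RECIPES if d in request.lower()), None)
    if dish is None:
        return None, ["ask the user to clarify the request"]
    plan, missing = [], []
    for item in RECIPES[dish]:
        if item in fridge:
            plan.append(f"take {item} from the fridge")
        elif SUBSTITUTES.get(item) in fridge:
            plan.append(f"use {SUBSTITUTES[item]} instead of {item}")
        else:
            missing.append(item)
    plan += [f"prepare and cook the {dish}", "set the dining table", "announce dinner"]
    return plan, missing

steps, missing = ground_request("I want beef stew for dinner",
                                {"beef", "onion", "salt", "celery"})
print(steps)
print("ask the user about missing items:", missing)
```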

A reflective IPS should be able to self-maintain and self-evolve. It should monitor its own behavior, sense and diagnose problems, adapt its configuration to new situations, and maintain and repair itself when needed. This is a very important feature for a Mars rover that navigates alone on Mars for years without a support and rescue team. A reflective IPS should also continually evolve and improve over time by exploring different solutions and accumulating knowledge, by trial and error, by learning from observing other IPSs, humans, or environmental changes, and by incorporating instructions or corrections. For example, a reflective IPS could learn and evolve the way a five-year-old child does.
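
Here is a toy sketch of the trial-and-error part: an epsilon-greedy loop over a few candidate self-repair strategies. The strategy names and success rates are arbitrary placeholders I made up for illustration.

```python
import random

# Candidate self-maintenance strategies and their true success rates,
# which are unknown to the robot and only discovered through experience.
STRATEGIES = {"recalibrate sensors": 0.6, "reboot subsystem": 0.3, "tighten joint": 0.8}

def simulate(strategy):
    """Stand-in for actually executing a repair attempt in the real world."""
    return random.random() < STRATEGIES[strategy]

def learn_repairs(trials=200, epsilon=0.1):
    """Epsilon-greedy trial and error: mostly exploit the best-known strategy,
    occasionally explore, and accumulate experience over time."""
    stats = {s: [0, 0] for s in STRATEGIES}  # strategy -> [successes, attempts]
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(list(STRATEGIES))
        else:
            choice = max(stats, key=lambda s: stats[s][0] / stats[s][1] if stats[s][1] else 0.0)
        stats[choice][0] += simulate(choice)
        stats[choice][1] += 1
    return stats

print(learn_repairs())
```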

An ethical IPS should not harm the human race. More subtly, an IPS should be a good citizen and observe current ethical and legal rules. An ethical IPS does not need to be a lawyer, but it should be able to make sense of ethical and legal rules using common sense and understand the consequences of its actions. So an ethical IPS should avoid behaviors prohibited by law or frowned upon from an ethical perspective. A good ethical IPS should also be able to reason its way out of deadlocks and resolve conflicts caused by multiple rules. A self-driving car would certainly need to make many ethical decisions every day. To avoid fatally injuring a jaywalking pedestrian, the car may have to crash into a wall, which may harm its passenger. Should the car hit the pedestrian or run into the wall and harm the passenger?
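
One toy way to frame such a decision is to score candidate actions against a prioritized set of rules and pick the least-bad option. The rules, weights, and candidate actions below are arbitrary placeholders, so the answer the code prints reflects those made-up weights rather than any real ethical calculus.

```python
# Hypothetical prioritized rules: a higher weight means a worse violation.
RULES = {
    "harm pedestrian": 100,
    "harm passenger": 80,
    "damage property": 10,
    "break traffic law": 5,
}

def ethical_cost(violations):
    """Total weighted cost of the rule violations an action would incur."""
    return sum(RULES[v] for v in violations)

def choose_action(candidates):
    """Pick the candidate action with the lowest total violation cost."""
    return min(candidates, key=lambda action: ethical_cost(candidates[action]))

candidates = {
    "hit the pedestrian": ["harm pedestrian"],
    "swerve into the wall": ["harm passenger", "damage property"],
    "hard brake and risk a rear-end collision": ["damage property", "break traffic law"],
}
print(choose_action(candidates))  # least-bad option under these placeholder weights
```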

The ultimate S&AS system would be one that can do all of the above. It could sense its own capabilities and limits, hold all the knowledge we have accumulated over thousands of years, do everything we ask of it while taking ethics and law into consideration, and continue to evolve without our help.

An IPS could be an autonomous vehicle that senses its surroundings, its environment, and its own condition, and drives itself safely over the long term. It could be a service robot that understands a user’s needs from verbal commands and performs the requested tasks. It could be an unmanned aerial vehicle that will not fly into an airport or the White House. It could be a surgical robot that can make diagnoses and plan and carry out surgery without a doctor involved. It could be a network of cameras that learns from the data it collects so it can detect threats, find suspects, solve crimes, and even prevent crimes from happening.

The program lays out a very encouraging vision of the future along with concrete research goals. Ten years ago, I predicted it would take 100 years for robots to achieve human-level capability. Now, with the S&AS program, I think researchers in the US will start moving toward that goal systematically, and we will get there much sooner than I predicted.

The total amount of funding NSF allocates each year is not large, though: only 16.5 million USD, supporting 25 to 40 projects. But I suspect it could increase if the program produces many high-impact research outcomes. Smaller, more focused projects would receive $350K to $700K, and more integrative projects would receive $500K to $1.4 million.
