WSJ Video: Japanese "Robot Hotel" Eliminates its Robots


This video shows the vision of a hotel without human employees. It did not work out.

The WSJ reported on January 14, 2019:

Turns out, robots aren’t the best at hospitality. After opening in a blaze of publicity in 2015, Japan’s Henn na, or “Strange,” Hotel, recognized by the Guinness Book of World Records as the world’s first robot hotel, is now laying off its low-performing droids. 


So far, the hotel has culled over half of its 243 robots, many because they created work rather than reduced it.

[...]

Mr. Sawada said he hasn’t given up on the idea of a hotel without human staff, but Strange Hotel has taught him that there are currently many jobs suited only for humans. “When you actually use robots you realize there are places where they aren’t needed—or just annoy people,” he said.


Full Story at WSJ.com. 

The Seven Deadly Sins of AI Predictions

This is a great article by Rodney Brooks, who is a former director of the Computer Science and Artificial Intelligence Laboratory at MIT and a founder of Rethink Robotics and iRobot. 

Excerpt: Similarly, we have seen a sudden increase in performance of AI systems thanks to the success of deep learning. Many people seem to think that means we will continue to see AI performance increase by equal multiples on a regular basis. But the deep-learning success was 30 years in the making, and it was an isolated event.

That does not mean there will not be more isolated events, where work from the backwaters of AI research suddenly fuels a rapid-step increase in the performance of many AI applications. But there is no “law” that says how often they will happen.

6. Hollywood scenarios

The plot for many Hollywood science fiction movies is that the world is just as it is today, except for one new twist.

In Bicentennial Man, Richard Martin, played by Sam Neill, sits down to breakfast and is waited upon by a walking, talking humanoid robot, played by Robin Williams. Richard picks up a newspaper to read over breakfast. A newspaper! Printed on paper. Not a tablet computer, not a podcast coming from an Amazon Echo–like device, not a direct neural connection to the Internet.

It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged. They ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other intelligences, and we will have lots of experience already. Long before there are evil super-intelligences that want to get rid of us, there will be somewhat less intelligent, less belligerent machines. Before that, there will be really grumpy machines. Before that, quite annoying machines. And before them, arrogant, unpleasant machines. We will change our world along the way, adjusting both the environment for new technologies and the new technologies themselves. I am not saying there may not be challenges. I am saying that they will not be sudden and unexpected, as many people think.

7. Speed of deployment

New versions of software are deployed very frequently in some industries. New features for platforms like Facebook are deployed almost hourly. For many new features, as long as they have passed integration testing, there is very little economic downside if a problem shows up in the field and the version needs to be pulled back. This is a tempo that Silicon Valley and Web software developers have gotten used to. It works because the marginal cost of newly deploying code is very, very close to zero.

Deploying new hardware, on the other hand, has significant marginal costs. We know that from our own lives. Many of the cars we are buying today, which are not self-driving, and mostly are not software-enabled, will probably still be on the road in the year 2040. This puts an inherent limit on how soon all our cars will be self-driving. If we build a new home today, we can expect that it might be around for over 100 years. The building I live in was built in 1904, and it is not nearly the oldest in my neighborhood.

Capital costs keep physical hardware around for a long time, even when there are high-tech aspects to it, and even when it has an existential mission.

The U.S. Air Force still flies the B-52H variant of the B-52 bomber. This version was introduced in 1961, making it 56 years old. The last one was built in 1962, a mere 55 years ago. Currently these planes are expected to keep flying until at least 2040, and perhaps longer — there is talk of extending their life to 100 years.

I regularly see decades-old equipment in factories around the world. I even see PCs running Windows 3.0 — a software version released in 1990. The thinking is “If it ain’t broke, don’t fix it.” Those PCs and their software have been running the same application doing the same task reliably for over two decades.

The principal control mechanism in factories, including brand-new ones in the U.S., Europe, Japan, Korea, and China, is based on programmable logic controllers, or PLCs. These were introduced in 1968 to replace electromechanical relays. The “coil” is still the principal abstraction unit used today, and PLCs are programmed as though they were a network of 24-volt electromechanical relays. Still. Some of the direct wires have been replaced by Ethernet cables. But they are not part of an open network. Instead they are individual cables, run point to point, physically embodying the control flow — the order in which steps get executed — in these brand-new ancient automation controllers. When you want to change information flow, or control flow, in most factories around the world, it takes weeks of consultants figuring out what is there, designing new reconfigurations, and then teams of tradespeople to rewire and reconfigure hardware. One of the major manufacturers of this equipment recently told me that they aim for three software upgrades every 20 years.

In principle, it could be done differently. In practice, it is not. I just looked on a jobs list, and even today, this very day, Tesla Motors is trying to hire PLC technicians at its factory in Fremont, California. They will use electromagnetic relay emulation in the production of the most AI-enhanced automobile that exists.
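The "coil" abstraction Brooks mentions can be illustrated with a toy sketch: a PLC repeatedly scans its inputs, evaluates relay-style rung logic, and writes its outputs (coils). This is not real PLC code (PLCs are typically programmed in ladder diagrams, not Python), and the input/coil names are invented, but it shows why the model is literally an emulation of 24-volt relays:

```python
# Toy sketch of the PLC "coil" abstraction: each scan reads inputs,
# evaluates relay-style rungs, and updates coils. Names are invented.

def scan_cycle(inputs, coils):
    """One PLC scan cycle over two example rungs."""
    # Rung 1: the motor coil energizes when start is pressed (or the
    # motor is already running) AND stop is not pressed -- the classic
    # latching "seal-in" rung inherited from electromechanical relays.
    coils["motor"] = (inputs["start"] or coils["motor"]) and not inputs["stop"]
    # Rung 2: an indicator lamp simply mirrors the motor coil.
    coils["lamp"] = coils["motor"]
    return coils

coils = {"motor": False, "lamp": False}
coils = scan_cycle({"start": True, "stop": False}, coils)   # operator presses start
coils = scan_cycle({"start": False, "stop": False}, coils)  # motor stays latched on
print(coils["motor"])  # True
coils = scan_cycle({"start": False, "stop": True}, coils)   # stop de-energizes the coil
print(coils["motor"])  # False
```

Rewiring a factory's control flow means physically re-running the cables that connect such rungs, which is why the changes Brooks describes take weeks of consultants and tradespeople rather than a software push.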

A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.

Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.

Read Full Article from MIT Technology Review

Interesting Essay on AI by the Former Head of Google China

The Human Promise of the AI Revolution

Artificial intelligence will radically disrupt the world of work, but the right policy choices can make it a force for a more compassionate social contract.

Here are a couple of excerpts:

"If handled with care and foresight, this AI crisis could present an opportunity for us to redirect our energy as a society to more human pursuits: to taking care of each other and our communities. To have any chance of forging that future, we must first understand the economic gauntlet that we are about to pass through.

Many techno-optimists and historians would argue that productivity gains from new technology almost always produce benefits throughout the economy, creating more jobs and prosperity than before. But not all inventions are created equal. Some changes replace one kind of labor (the calculator), and some disrupt a whole industry (the cotton gin). Then there are technological changes on a grander scale. These don’t merely affect one task or one industry but drive changes across hundreds of them. In the past three centuries, we’ve only really seen three such inventions: the steam engine, electrification and information technology.

[...]

AI’s main advantage over humans lies in its ability to detect incredibly subtle patterns within large quantities of data and to learn from them. While a human mortgage officer will look at only a few relatively crude measures when deciding whether to grant you a loan (your credit score, income and age), an AI algorithm will learn from thousands of lesser variables (what web browser you use, how often you buy groceries, etc.). Taken alone, the predictive power of each of these is minuscule, but added together, they yield a far more accurate prediction than the most discerning people are capable of.

For cognitive tasks, this ability to learn means that computers are no longer limited to simply carrying out a rote set of instructions written by humans. Instead, they can continuously learn from new data and perform better than their human programmers. For physical tasks, robots are no longer limited to repeating one set of actions (automation) but instead can chart new paths based on the visual and sensor data they take in (autonomy). 

Together, this allows AI to take over countless tasks across society: driving a car, diagnosing a disease or providing customer support. AI’s superhuman performance of these tasks will lead to massive increases in productivity. According to a June 2017 study by the consulting firm PwC, AI’s advance will generate $15.7 trillion in additional wealth for the world by 2030. This is great news for those with access to large amounts of capital and data. It’s very bad news for anyone who earns their living doing soon-to-be-replaced jobs."
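The excerpt's point about thousands of individually weak variables is essentially a statistical one: many noisy signals, combined, can be far more accurate than one crude measure. A minimal sketch with invented numbers (this is not the essay's model, just an illustration of the principle):

```python
# Sketch: many weak predictors combined beat one crude one.
# Each "lesser variable" agrees with the true outcome only 55% of
# the time, but a majority vote over hundreds of them is far better.
import random

random.seed(0)

def weak_signal(truth):
    """One weak binary predictor, correct with probability 0.55."""
    return truth if random.random() < 0.55 else 1 - truth

def majority_predict(truth, n_signals):
    """Predict by majority vote over n independent weak signals."""
    votes = sum(weak_signal(truth) for _ in range(n_signals))
    return int(votes * 2 > n_signals)

def accuracy(n_signals, trials=2000):
    hits = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        hits += majority_predict(truth, n_signals) == truth
    return hits / trials

print(accuracy(1))    # roughly 0.55: like one crude measure
print(accuracy(501))  # close to 1.0: many weak signals combined
```

Real systems combine correlated, unequally weighted features with learned models rather than a simple vote, but the underlying effect is the same: aggregation turns minuscule individual predictive power into a strong overall prediction.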