2045 Initiative | Strategic Social Initiative
http://2045.com/

Dmitry Itskov: www.Immortal.me - Want to be immortal? Act!
http://2045.com/news/33999.html

Fellow Immortalists!

Many of the daily letters that the 2045 Initiative and I receive ask the question: will only the very rich be able to afford an avatar in the future, or will they be relatively cheap and affordable for almost everyone?

I would like to answer this question once again: avatars will be cheap and affordable for many people, but only if people themselves make every effort needed to achieve this, rather than wait until someone else does everything for them.

To facilitate and expedite this, I am hereby soft-launching a project today which will allow anyone to contribute to the creation of a ‘people’s avatar’… and perhaps even capitalize on this in the future. The project is named Electronic Immortality Corporation. It will soon be launched at http://www.immortal.me under the motto "Want to be immortal? Act!"

The Electronic Immortality Corporation will be a social network, operating under the rules of a commercial company. Instead of a user agreement, volunteers will get jobs and sign a virtual contract.

In addition to creating a ‘people’s avatar’, the Electronic Immortality Corporation will also implement various commercial and charitable projects aimed at realizing ideas of the 2045 Initiative, transhumanism and immortalism.

We will create future technologies that can be commercialized within decades (e.g. Avatar C), as well as implement ‘traditional’ business projects, such as producing commercially viable movies.

Even the smallest volunteer contribution to the work of the Corporation will be rewarded with its own virtual currency, which will be issued for two purposes only: a) to reward volunteer work, and b) to compensate real financial investments in the company. Who knows, our virtual currency may well become as popular and in demand as Bitcoin.

The first steps are as follows:

First, we will establish an expert group, which will shape the final concept and the statutes of the Electronic Immortality Corporation.

Second, we will announce and organize two competitions: a) to create the corporate identity of the Electronic Immortality Corporation, and b) the code of the social network.

Third, we will form the Board of Directors of the Electronic Immortality Corporation.  There, we would like to see experienced businessmen with a track record of successfully implemented large projects.

Fourth, we will engage celebrities and public figures from around the world.

Therefore, if you…

- have experience in creating social networks, online games, gaming communities and are willing to discuss the final concept of the Electronic Immortality Corporation,

- are a brilliant designer,

- are a talented programmer with experience in developing large-scale and/or open source projects,

- are a businessman with experience in managing large companies who is ready to join the Board of Directors of the Electronic Immortality Corporation, or you know of such a person,

- are in contact with celebrities and ready to engage them in the Electronic Immortality Corporation;

and at the same time you desire to change the world, to build a high-tech reality, to participate in creating avatars and immortality technologies… if all of this is your dream and you are ready to serve it selflessly,

email us at team@immortal.me

Want to be immortal? Act!

 

Dmitry Itskov

Founder of the 2045 Initiative



Sun, 23 Apr 2045 21:50:23 +0000
Artificial intelligence is going to completely change your life
http://2045.com/news/35208.html

Just as electricity transformed the way industries functioned in the past century, artificial intelligence — the science of programming cognitive abilities into machines — has the power to substantially change society in the next 100 years. AI is being harnessed to enable such things as home robots, robo-taxis and mental health chatbots designed to make you feel better. A startup is developing robots with AI that brings them closer to human-level intelligence. Already, AI has been embedding itself in daily life — such as powering the brains of digital assistants Siri and Alexa. It lets consumers shop and search online more accurately and efficiently, among other tasks that people take for granted.

“AI is the new electricity,” said Andrew Ng, co-founder of Coursera and an adjunct Stanford professor who founded the Google Brain Deep Learning Project, in a keynote speech at the AI Frontiers conference that was held this past weekend in Silicon Valley. “About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform” every major sector in coming years. And even though there’s a perception that AI is a fairly new development, it has actually been around for decades, he said. But it is taking off now because of the ability to scale data and computation.

Ng said most of the value created through AI today has been through supervised learning, in which an input of X leads to Y. But there have been two major waves of progress: One wave leverages deep learning to enable such things as predicting whether a consumer will click on an online ad after the algorithm gets some information about him. The second wave came when the output no longer had to be a number or integer but could be something like a speech transcription, a sentence in another language or audio. For example, in self-driving cars, the input of an image can lead to an output of the positions of other cars on the road.
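
To make the "X leads to Y" framing concrete, here is a minimal sketch of a click-prediction model of the kind Ng describes. The features, data and use of scikit-learn are illustrative assumptions, not details from his talk.

    # Minimal supervised-learning sketch: features about a user and an ad (X)
    # mapped to a click / no-click label (Y). Data and features are invented;
    # assumes scikit-learn is installed.
    from sklearn.linear_model import LogisticRegression

    # X columns: [user_age, pages_viewed_this_session, ad_relevance_score]
    X = [[25, 3, 0.90], [34, 1, 0.20], [45, 7, 0.80], [19, 2, 0.10],
         [52, 5, 0.70], [23, 1, 0.30], [31, 6, 0.95], [60, 2, 0.15]]
    y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = clicked, 0 = did not click

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[29, 4, 0.85]])[0][1])  # estimated click probability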

Indeed, deep learning — where a computer learns from datasets to perform functions, instead of just executing specific tasks it was programmed to do — was instrumental in achieving human parity in speech recognition, said Xuedong Huang, who led the team at Microsoft on the historic achievement in 2016 when their system booked a 5.9% error rate, the same as a human transcriptionist. “Thanks to deep learning, we were able to reach human parity after 20 years,” he said at the conference. The team has since lowered the error rate even more, to 5.1%.
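
Figures like the 5.9% above are typically word error rates (WER): the word-level edit distance between the system's transcript and a reference transcript, divided by the number of reference words. A small illustrative computation (not Microsoft's evaluation code) is below.

    # Word error rate (WER): word-level edit distance between a reference
    # transcript and the recognizer's output, divided by the reference length.
    # Illustrative implementation, not Microsoft's evaluation code.
    def wer(reference: str, hypothesis: str) -> float:
        r, h = reference.split(), hypothesis.split()
        # dp[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words
        dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            dp[i][0] = i
        for j in range(len(h) + 1):
            dp[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                cost = 0 if r[i - 1] == h[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[len(r)][len(h)] / len(r)

    print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 0.1667, i.e. ~16.7%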

The Rise of Digital Assistants

Starting in 2010, the quality of speech recognition began to improve for the industry, eventually leading to the creation of Siri and Alexa. “Now, you almost take it for granted,” Ng said. That’s not all; speech is expected to replace touch-typing for input, said Ruhi Sarikaya, director of Amazon Alexa. The key to greater accuracy is to understand the context. For example, if a person asks Alexa what he should do for dinner, the digital assistant has to assess his intent. Is he asking Alexa to make a restaurant reservation, order food or find a recipe? If he asks Alexa to find ‘Hunger Games,’ does he want the music, video or audiobook?

And what’s next for the digital assistant is an even more advanced undertaking — to understand “meaning beyond words,” said Dilek Hakkani-Tur, a research scientist at Google. For example, if the user uses the words “later today,” it could mean 7 p.m. to 9 p.m. for dinner or 3 p.m. to 5 p.m. for meetings. This next level up also calls for more complex and lively conversations, multi-domain tasks and interactions beyond domain boundaries, she said. Moreover, Hakkani-Tur said, digital assistants should be able to do things such as easily read and summarize emails.

After speech, ‘computer vision’ — or the ability of computers to recognize images and categorize them — was the next to leap, speakers said. With many people uploading images and video, it became cumbersome to add metadata to all content as a way to categorize it. Facebook built an AI called Lumos to understand and categorize videos at scale, said Manohar Paluri, a research lead at the company. Facebook uses Lumos to do data collection of, for example, fireworks images and videos. The platform can also use people’s poses to identify a video, such as categorizing people lounging around on couches as hanging out.

What’s critical is to ascertain the primary semantic content of the uploaded video, added Rahul Sukthankar, head of video understanding at Google. And to help the computer correctly identify what’s in the video — for example, whether professionals or amateurs are dancing — his team mines YouTube for similar content that AI can learn from, such as having a certain frame rate for non-professional content. Sukthankar adds that a promising direction for future research is to do computer training using videos. So if a robot is shown a video of a person pouring cereal into a bowl at multiple angles, it should learn by watching.

At Alibaba, AI is used to boost sales. For example, shoppers of its Taobao e-commerce site can upload a picture of a product they would like to buy, like a trendy handbag sported by a stranger on the street, and the website will come up with handbags for sale that come closest to the photo. Alibaba also uses augmented reality/virtual reality to let people see and shop from stores like Costco. On its Youku video site, which is similar to YouTube, Alibaba is working on a way to insert virtual 3D objects into people’s uploaded videos, as a way to increase revenue. That’s because many video sites struggle with profitability. “YouTube still loses money,” said Xiaofeng Ren, a chief scientist at Alibaba.

Rosie and the Home Robot

But with all the advances in AI, it’s still no match for the human brain. Vicarious is a startup that aims to close the gap by developing human-level intelligence in robots. Co-founder Dileep George said that the components are there for smarter robots. “We have cheap motors, sensors, batteries, plastics and processors … why don’t we have Rosie?” He was referring to the multipurpose robot maid in the 1960s space-age cartoon The Jetsons. George said the current level of AI is like what he calls the “old brain,” similar to the cognitive ability of rats. The “new brain” is more developed, akin to what is seen in primates and whales.

George said the “old brain” AI gets confused when small inputs are changed. For example, a robot that can play a video game goes awry when the colors are made just 2% brighter. “AI today is not ready,” he said. Vicarious uses deep learning to get the robot closer to human cognitive ability. In the same test, a robot with Vicarious’s AI kept playing the game even though the brightness had changed. Another thing that confuses “old brain” AI is putting two objects together. People can see two things superimposed on each other, such as a coffee mug partly obscuring a vase in a photo, but robots mistake them for one unidentified object. Vicarious, which counts Facebook CEO Mark Zuckerberg as an investor, aims to solve such problems.

The intelligence inside Kuri, a robot companion and videographer meant for the home, is different. Kaijen Hsiao, chief technology officer of creator Mayfield Robotics, said there is a camera behind the robot’s left eye that gathers video in HD. Kuri has depth sensors to map the home and uses images to improve navigation. She also has pet and person detection features so she can smile or react when they are around. Kuri has place recognition as well, so she will remember she has been to a place before even if the lighting has changed, such as the kitchen during the day or night. Moment selection is another feature of the robot, which lets her recognize similar videos she records — such as dad playing with the baby in the living room — and eliminate redundant ones.

“Her job is to bring a spot of life to your home. She provides entertainment — she can play music, podcasts, audiobooks. You can check your home from anywhere,” Hsiao said. Kuri is the family’s videographer, going around the house recording so no one is left out. The robot will curate the videos and show the best ones. For this, Kuri uses vision and deep learning algorithms. “Her point is her personality … [as] an adorable companion,” Hsiao said. Kuri will hit the market in December at $799.


Business Response to AI

The U.S. and China lead the world in investments in AI, according to James Manyika, chairman and director of the McKinsey Global Institute. Last year, AI investment in North America ranged from $15 billion to $23 billion, Asia (mainly China) was $8 billion to $12 billion, and Europe lagged at $3 billion to $4 billion. Tech giants are the primary investors in AI, pouring in between $20 billion and $30 billion, with another $6 billion to $9 billion from others, such as venture capitalists and private equity firms.

Where did they put their money? Machine learning took 56% of the investments, with computer vision second at 28%. Natural language garnered 7%, autonomous vehicles were at 6% and virtual assistants made up the rest. But despite the level of investment, actual business adoption of AI remains limited, even among firms that know its capabilities, Manyika said. Around 40% of firms are thinking about it, 40% experiment with it and only 20% actually adopt AI in a few areas.

The reason for such reticence is that 41% of companies surveyed are not convinced they can see a return on their investment, 30% said the business case isn’t quite there and the rest said they don’t have the skills to handle AI. However, McKinsey believes that AI can more than double the impact of other analytics and has the potential to materially raise corporate performance.

There are companies that get it. Among sectors leading in AI are telecom and tech companies, financial institutions and automakers. Manyika said these early adopters tend to be larger and digitally mature companies that incorporate AI into core activities, focus on growth and innovation over cost savings and enjoy the support of C-suite level executives. The slowest adopters are companies in health care, travel, professional services, education and construction. However, as AI becomes widespread, it’s a matter of time before firms get on board, experts said.

Tue, 20 Mar 2018 12:01:13 +0000
$10 million XPRIZE Aims for Robot Avatars That Let You See, Hear, and Feel by 2021
http://2045.com/news/35210.html

Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize, sponsored by Japanese airline ANA, comes with little guidance on how teams are expected to solve the challenge, other than that their solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications, and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

The robot’s software still has to do a lot more than high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever the fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough that there’s no lag or interruptions. Fortunately, 5G is launching this year, with speeds of up to 10 gigabits per second and very low latency, so this problem should be solved by 2021.
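
Bandwidth is only half of the story; the other half is latency, and at 100 kilometers the physics is actually forgiving. A back-of-envelope sketch (assuming signal speeds near the speed of light and ignoring encoding, switching and scheduling delays, which real systems add on top):

    # Propagation delay over the competition's 100 km minimum separation.
    # Assumes ~c for radio links and ~2/3 c for optical fiber; real systems add
    # codec, switching and scheduling delays on top of these floors.
    C = 299_792_458          # speed of light in vacuum, m/s
    distance_m = 100_000     # 100 km

    for medium, speed in [("radio / free space", C), ("optical fiber", 2 * C / 3)]:
        one_way_ms = distance_m / speed * 1000
        print(f"{medium}: {one_way_ms:.2f} ms one way, {2 * one_way_ms:.2f} ms round trip")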

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE

Mon, 19 Mar 2018 12:09:15 +0000
IBM Highlights 5 Technologies It Hopes To Pioneer In 5 Years
http://2045.com/news/35207.html

We tend to think of innovation as being about ideas. A lone genius working in a secret lab somewhere screams "Eureka!" and the world is instantly changed. But that's not how the real world works. In truth, innovation is about solving problems and it starts with identifying useful problems to solve.

It is with that in mind that IBM comes out with its annual list of five technologies that it expects to impact the world in five years. Clearly, each year's list is somewhat speculative, but it also gives us a look at the problems that the company considers to be important and that its scientists are actively working on solving.

This year's list focuses on two aspects of digital technology that are particularly important for businesses today. The first is how we can use digital technology to provide a greater impact on the physical world in which we all live and work. The second, which is becoming increasingly crucial, is how we can make those technologies more secure.

1. AI Powered Microscopes At Nanoscale

In the late 17th century a middle-aged draper named Antonie van Leeuwenhoek became interested in the magnifying glasses he used to inspect fabric. From those humble beginnings arose the new age of microscopy, which has helped produce countless major discoveries over the last 350 years.

Today, IBM hopes to spur a similar revolution with nanoscale microscopes powered by AI. Unlike Leeuwenhoek's version, these will not use optical lenses, but optical chips like the ones in your cell phone, shrunk down small enough to observe microscopic cells in their natural environment, with AI used to analyze and interpret what they see.

As Simone Bianco and Tom Zimmerman, both researchers at the company, explained in their popular TED Talk, these devices can help us to better understand how plankton in the world's oceans behave in reaction to stimuli and help mitigate the effects of global warming.

IBM is also partnering with the National Science Foundation (NSF) to transform our body's cells into microscopic sensors. With a greater understanding of what's going on in both normal and abnormal conditions, scientists will be able to better diagnose disease and come up with new cures. It may even help power science for the next 350 years.

2. Combining Crypto-Anchors And Blockchain To Secure The World's Supply Chains

In 1999, a young assistant brand manager at Procter and Gamble named Kevin Ashton realized that an obscure technology that used radio waves to power small, passive devices could revolutionize supply chain management. Today, RFID chips are everywhere, helping us to track and manage inventory across the globe.

However, although RFID helps to increase efficiency, it can do little about security, which has become a massive problem for two reasons. First, counterfeiting costs businesses hundreds of billions of dollars a year and helps finance criminal gangs and terrorists. Second, in the case of things like food and medicine, insecure supply chains are a major health hazard.

IBM sees a solution to the problem of counterfeit goods through combining tamper-proof digital fingerprints it calls "crypto-anchors" with blockchain technology to secure supply chains at a cost low enough to spur wide adoption. It is also unveiling the world's smallest computer this week. Costing less than 10 cents to make and smaller than a grain of salt, it can be used to analyze products such as wines and medicine and verify provenance.
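
IBM has not published the implementation details here, but the general pattern of a tamper-evident provenance trail is straightforward: each supply-chain event commits to the hash of the previous event plus the item's crypto-anchor fingerprint. A toy sketch of that pattern follows; the names and structure are invented, not IBM's design.

    # Toy tamper-evident provenance chain: every record commits to the previous
    # record's hash and a "crypto-anchor" fingerprint of the physical item.
    # Generic illustration of the pattern, not IBM's actual design.
    import hashlib, json

    def record(prev_hash: str, anchor_fingerprint: str, event: str) -> dict:
        body = {"prev": prev_hash, "anchor": anchor_fingerprint, "event": event}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    genesis = record("0" * 64, "pill-batch-fp-91c2", "manufactured")
    shipped = record(genesis["hash"], "pill-batch-fp-91c2", "shipped to distributor")

    # Rewriting the genesis record changes its hash and breaks every later link:
    assert shipped["prev"] == genesis["hash"]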

As a first step to securing supply chains, the company has formed a joint venture with the global logistics firm Maersk to implement blockchain technology throughout the world. For businesses, this will mean a supply chain that is more efficient, reliable and secure.

3. Super-Secure Lattice Cryptography

2017 was a great year for cyber attackers, but not so good for the rest of us. Major breaches at Equifax, Uber and in a database containing over 200 million voter records were just the highlights of a banner year for hackers. These attacks highlight a critical vulnerability for both our financial systems and the integrity of our democracy.

Part of the problem is that conventional cryptography is designed to be incredibly difficult to break -- even for supercomputers -- which means information needs to be decrypted in order to be analyzed. When hackers get into a system, they can often take whatever they want.

IBM is working on a form of security called lattice-based cryptography. Unlike traditional methods, which use impossibly large prime numbers as a key, these use complex algebraic problems called "lattices" to secure information -- even from quantum computers many years from now. A related technology, called Fully Homomorphic Encryption (FHE), will allow systems to analyze data without decryption.

So, for example, a hacker breaking into a customer or voter database would be free to calculate how much tax is owed on a purchase or how many Millennials reliably vote for Democrats, but identities would remain secret. Businesses may also be able to analyze data they never could before, because they won't actually need to be given decrypted access.
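
Lattice-based FHE itself is mathematically heavy, but the core idea of doing arithmetic on data that stays encrypted can be illustrated with a much simpler scheme. Below is a toy Paillier example with deliberately tiny primes; it is insecure, only additively homomorphic, and not the lattice cryptography IBM is building. It just demonstrates that a sum can be computed from ciphertexts alone.

    # Toy Paillier cryptosystem (additively homomorphic). The primes are far too
    # small for any real use; this only illustrates computing on ciphertexts.
    # It is not the lattice-based FHE described above.
    import math, random

    p, q = 61, 53
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                                # valid because g = n + 1

    def encrypt(m: int) -> int:
        r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        L = lambda x: (x - 1) // n
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 17, 25
    ca, cb = encrypt(a), encrypt(b)
    # Multiplying ciphertexts adds the underlying plaintexts:
    print(decrypt((ca * cb) % n2))   # 42: the sum was computed on the ciphertexts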

4. Rooting Out Data Bias For Reliable And Trustworthy Artificial Intelligence

As Cathy O'Neil explains in Weapons of Math Destruction, data bias has become a massive problem. One famous example of this kind of bias is Microsoft Tay, an AI-powered agent that was let loose on Twitter. Exposed to Internet trolls, it was transformed from a friendly and casual bot ("humans are super cool") to downright scary ("Hitler was right and I hate Jews").

Even more serious are the real world impacts of data bias. Today's algorithms often determine what college we attend, if we get hired for a job and even who goes to prison and for how long. However, these systems are often "black boxes" whose judgments are rarely questioned. They just show up on a computer screen and fates are determined.

IBM is now working with MIT to embed human values and principles in automated decision-making and to devise methods to test, audit and prevent bias in the data that AI systems use to augment human judgments. With numerous questions being raised about the ethics of AI, the ability to oversee the algorithms that affect our lives is becoming essential.
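
One simple example of the kind of audit this involves is checking whether a model hands out favorable outcomes at very different rates across groups (a demographic-parity or disparate-impact test). The sketch below is a generic illustration with invented data, not IBM's or MIT's methodology.

    # Generic bias audit: compare a model's positive-outcome rates across groups.
    # Invented data; not IBM's or MIT's methodology.
    from collections import defaultdict

    decisions = [  # (group, model_decision) pairs, 1 = favorable outcome
        ("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0),
    ]

    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        favorable[group] += decision

    rates = {g: favorable[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    print(rates, ratio)   # {'A': 0.75, 'B': 0.25} and a ratio of ~0.33
    # A common rule of thumb flags ratios below 0.8 for further review.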

5. Quantum Computing Goes Mainstream

Quantum computing is a technology that IBM has been working on for decades. Still, until relatively recently, it was mostly a science project, with little practical value or obvious commercial applications. Today, however, the field is advancing quickly and many firms, including Google, Microsoft and Intel, are investing heavily in the technology.

Over the next five years the company sees quantum computing becoming a mainstream technology and has unveiled several initiatives to help make that happen.

  • Q Experience, a real working quantum computer that anyone can access through the cloud to learn to work with the new technology
  • QISkit, a set of tools that helps people program quantum computers using the popular Python language (see the sketch after this list)
  • Q Network, a group of organizations exploring practical applications of quantum computers.
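
To give a flavor of what programming a quantum computer with QISkit (Qiskit) looks like, here is a minimal sketch that builds a two-qubit entangling circuit. Circuit construction has been stable across Qiskit releases; actually running it requires choosing a simulator or cloud backend, which varies by version, so that step is left out here.

    # Minimal Qiskit sketch: prepare a two-qubit Bell state and measure it.
    # Assumes the qiskit package is installed; only circuit construction is
    # shown, since the simulator/backend API differs between versions.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # put qubit 0 into superposition
    qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

    print(qc.draw())             # text diagram of the circuit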

The effort to design new computing architectures, which includes neuromorphic chips as well as quantum computers, is becoming increasingly important as Moore's law winds down and theoretical limits soon make further advances in transistor-based computers impossible.

Like the other initiatives in IBM's 5 for 5, taking quantum computing mainstream within five years stretches the bounds of the possible, but that's very much the point. Identifying a meaningful problem and setting a goal to solve it are the first steps in transforming an idea into reality.

Mon, 19 Mar 2018 11:59:07 +0000
With a spacecraft in trouble and the White House watching, SpaceX had to deliver
http://2045.com/news/35209.html

Within minutes of liftoff, it was clear the Dragon spacecraft was in trouble.

Inside mission control on the morning of March 1, 2013, the SpaceX team was desperately trying to figure out what went wrong and soon pinpointed the problem: A few valves were stuck.

Lori Garver, NASA’s deputy administrator, was beside herself. The Obama administration had placed a bold bet on Elon Musk’s SpaceX, awarding it hundreds of millions of dollars on contracts to fly crew — not just cargo — to the International Space Station, despite the critics who said it was foolish to trust a private outfit with such a complicated endeavor.

This was a fundamental shift for NASA, a move that some in the agency’s highest reaches were wary of, and a risky bet by the White House. Under President Barack Obama, NASA continued plans to retire the space shuttle and hired contractors — SpaceX and Boeing — to fly astronauts to the International Space Station as if they were providing a taxi service to space. That, in turn, would allow NASA to focus on missions in deep space and recapture some of the glory that had faded in the decades since the Apollo era put 12 men on the moon.

In a 2010 speech at the Kennedy Space Center, Obama acknowledged the risks of the proposal. “Now, I recognize that some have said it is unfeasible or unwise to work with the private sector in this way,” he said. But the U.S. space program needed a kick-start, he said: “We will also accelerate the pace of innovations as companies — from young start-ups to established leaders — compete to design and build and launch new means of carrying people and materials out of our atmosphere.”

Musk had become the face of this new policy, the brash Silicon Valley tech billionaire who founded SpaceX in 2002 with the goal of colonizing Mars. At first, the company had struggled, and it nearly went out of business after three failed attempts to reach orbit. But after a successful launch in 2008, SpaceX won a $1.6 billion NASA cargo contract, prompting Musk to change his log-in password to “ILoveNASA.”

Now, he had to show that his Hawthorne, Calif., upstart could deliver. And with Dragon struggling in 2013, he was starting to sweat.

Musk wasn’t the only one with a lot to lose. If the spacecraft didn’t dock with the station, if the mission somehow failed, Garver feared the critics would again blast Obama’s decision.

Many were already upset with the White House for canceling the President George W. Bush-era Constellation program — a plan to return to the moon using big rockets and spacecraft built by the traditional industrial base led by Boeing and Lockheed Martin, a plan that SpaceX was fighting to disrupt.

Sen. Richard C. Shelby (R-Ala.), a powerful member of the Appropriations Committee, said at the time that Obama’s plan “begins the death march for the future of U.S. human spaceflight” and that Obama was turning NASA into the agency of “pipe dreams and fairy tales.”

Obama’s shift also drew criticism from Michael Griffin, a former NASA administrator.

“One day it will be like commercial airline travel, just not yet,” he said of space flight. “It’s like 1920. Lindbergh hasn’t flown the Atlantic, and they’re trying to sell 747s to Pan Am.”

To assuage concerns, the White House decided Obama would visit the United Launch Alliance, the joint venture between Lockheed Martin and Boeing. The message was clear: Although the president canceled one of their major programs, the contractors were still a vital part of the U.S. space program. His presence there would be an endorsement and a signal to Congress.

But there was a problem: The Alliance was about to launch a highly classified spaceplane known as the X-37B that would ultimately stay in orbit for months at a time. But doing what? The Pentagon wouldn’t say. The program was secret, which was why the president couldn’t just swing by for a photo op. The National Security Council wouldn’t hear of it.

So the White House scrambled. Instead, the president would visit SpaceX, a development that the company welcomed. A presidential visit would represent a public relations triumph over its archrival, even if it was, as Musk said later, “a sheer accident.”

An accident that had raised the stakes even further — for Musk and the White House. This was SpaceX’s second official cargo delivery flight to the space station. It had to work, thought Garver, the NASA deputy administrator. They had to find a way to rescue Dragon, and fast.

But as they tried to figure out what was wrong, Steve Davis, SpaceX’s director of advanced projects, had begun to prepare for the worst — aborting the mission.

“Is the vehicle even functioning enough that you can bring it back?” he wondered. “We weren’t sure. That was the only time we had ever planned for an emergency reentry, which is like a big thing because you have to whip it through airspace. You have to reroute planes in real time. It’s not awesome. And so we were in panic mode.”

In late 2010, on the eve of the Falcon 9’s second launch and the first test flight of the Dragon spacecraft, a last-minute inspection of the rocket revealed a crack in the nozzle. (David Hash/SpaceX)

Be scrappy or die

SpaceX had been in panic mode before. In late 2010, on the eve of the Falcon 9’s second launch and the first test flight of the Dragon spacecraft, a last-minute inspection of the rocket revealed a crack in the nozzle, or skirt, of the second-stage engine.

“You’re not going to fly with a crack,” Davis said. “We’re like, ‘What do we do?’ ”

The normal thing would be to take the rocket apart, replace the engine skirt, reinspect it. And then “you’re up and launching in a month,” he said. No one wanted to lose that much time.

Musk had a wild idea: “What if we just cut the skirt? Like, literally cut around it?” That is, what if they trimmed off the bottom as if it were a fingernail?

“He went person by person and said, ‘Would this have any adverse effect on you?’ ” Davis recalled.

Davis said that because the skirt would be shorter, they would get less performance from the engine. “But we had so much margin built into it, it didn’t matter,” he said. Everyone concurred, and “literally within 30 minutes, the decision was made.”

The company flew a technician from California to Cape Canaveral; armed with a pair of shears, like the kind used to trim hedges, he cut around the crack.

“And we flew the next day successfully,” Davis said. “That could have been the dumbest thing we ever did, but it was amazing.”

That was not how NASA would have handled it. But its officials agreed that there wasn’t any reason it wouldn’t work and approved the launch, astounded by how quickly SpaceX was addressing the problem.

That sort of go-for-it ethos had become a SpaceX trademark. Gwynne Shotwell, the company’s president, described the culture this way: “Head down. Plow through the line.”

Musk had an obsession with a relentless focus on the mission that included a standing rule for his employees: If they ever found themselves in a meeting that was a waste of time, they had his permission to get up and leave. No questions asked.

“We had to be super scrappy,” Musk said. “If we did it the standard way, we would have run out of money. For many years, we were week to week on cash flow, within weeks of running out of money. It definitely creates a mind-set of smart spending. Be scrappy or die: Those were our two options. Buy scrap components, fix them up, make them work.”

So when the company was rebuilding launchpad 40 at the Cape Canaveral Air Force Station, one of SpaceX’s employees spotted a 125,000-gallon liquid nitrogen tank and thought, “Maybe we could use this?”

Despite sitting outside for years, the tank seemed in decent shape, and SpaceX’s 10-member team on the Cape wanted it. They called the Air Force asking permission, but their calls went unreturned. They persisted and were put in touch with a company that had been hired to haul away the tank and destroy it.

The company was willing to part with the tank for $1 more than the cost of scrapping it — $86,000. So the members of the SpaceX team became the scavengers of Cape Canaveral, looking for leftover hardware as if they were on a treasure hunt. Old rail cars from the 1960s that had been used to ferry helium between New Orleans and Cape Canaveral became the new storage tanks. Instead of spending $75,000 on new air-conditioning chillers for the ground equipment building, someone found a deal on eBay for $10,000.

Cost drove lots of decisions, even how the company built its rockets. Once Musk got wind that the air conditioning system used to keep the satellite cool in the rocket’s fairing, or nose cone, was going to cost more than $3 million, he confronted the designer about it.

“What’s the volume in the fairing?” he wanted to know. The answer: less than that of a house.

He turned to Shotwell and asked her how much a new air-conditioning system for a house cost.

“We just changed our air-conditioning,” she replied. “It was six thousand bucks.”

“Why is this $3 or $4 million when your air conditioning system is $6,000?” he asked. “Go back and figure this out.”

The company did, buying six commercial A/C units with bigger pumps that could handle a larger airflow.

Employees walk beneath a Dragon spacecraft hanging above the factory floor at SpaceX. (Ricky Carioti/The Washington Post)

Pressure treatment

Now, as Dragon was in trouble with the stuck valves, SpaceX had to figure out on the fly how to make it work.

As the SpaceX team scrambled, Bill Gerstenmaier, NASA’s associate administrator for human exploration and operations, and Michael Suffredini, the space station’s program office manager, were in the room watching.

They were two of the agency’s most senior officials, with nearly 60 years at NASA between them. They had served through the Challenger and Columbia disasters, had seen all sorts of problems in space, and now, as NASA faced another potential crisis, they were talking softly between themselves.

Fearing the political fallout of a failed mission, Garver, the deputy NASA administrator, wanted them to take over, to swoop in and save SpaceX.

There were no better people to come fix this. But the two NASA elder statesmen just watched, offering a bit of advice, a whisper here, a suggestion there, to the SpaceX crew. Mostly, they stayed out of the way.

“They were like grandparents,” Garver recalled. “And it was almost like grandpa taking them fishing: ‘Try over there. There might be some fish over there.’ ” A soft touch designed to let the kids learn to fish on their own, rather than an impatient dad grabbing the pole and catching the fish for them.

“If there was something we saw that we could have interjected, we would have done it,” Gerstenmaier recalled. But it wasn’t NASA’s spacecraft.

“We really were in an advisory role,” Suffredini said.

As they watched, the people in the control room worked the problem. The valves were stuck, so they’d need something to knock them loose. On a spacecraft circling the globe at 17,500 mph, that was no easy task. But the SpaceX team knew that if pressure could be built ahead of the valves and then suddenly released, it might just deliver the kick needed to jar them open.

“It’s like the spacecraft equivalent of the Heimlich maneuver,” Musk said later.

One of the engineers typed up a command, right then, on the fly, programming the spacecraft to build up the pressure. Then, they tried to beam the new command up to the Dragon, as if it were an iPhone update. At that moment, the NASA elders knew they were witnessing something special. It wasn’t that they had fixed a problem with the spacecraft; that happened all the time. It was how fast they did it.

“The SpaceX mind-set had always been about adapting quickly, and it really shined that day,” Suffredini said. “They had really an in-depth understanding of that system and the software, and that’s one of the secrets of their success. They probably had the kid in there who wrote the original code.”

But the SpaceX crew was having a hard time communicating with the spacecraft. The code wouldn’t transmit. So someone got the Air Force on the phone, which gave the company access to a more powerful satellite dish, which allowed, at last, the uplink.

The code worked. The valves opened. The mission was a success. 

This article is adapted from the forthcoming book “The Space Barons: Elon Musk, Jeffrey P. Bezos and the Quest to Colonize the Cosmos.”

Fri, 16 Mar 2018 12:06:33 +0000
This WALK-MAN robot can go places too dangerous for humans
http://2045.com/news/35211.html

Humanoid robots may not be ready to take your job. But that doesn’t mean they can’t be useful.

Researchers at the Italian Institute of Technology built a robot they called WALK-MAN that they say is uniquely capable of assisting in emergency situations. Humans control the machines using a virtual reality headset and a tracking suit that’s designed so the robot mirrors their movements. That way, the robot can go places too dangerous for humans, while the humans operate the machine from a safe distance.

There’s still significant work to be done: the robot is fairly slow, so it’s not well-suited for time-sensitive work like saving people from an unstable building. But video of the robot in action shows some of the work it can do, and gives a sense of where the research is headed.

Mon, 12 Mar 2018 12:11:53 +0000
Google's quantum computing breakthrough: Our new chip might soon outperform a supercomputer
http://2045.com/news/35206.html

Google's Quantum AI Lab has shown off a new 72-qubit quantum processor called 'Bristlecone', which it says could soon achieve 'quantum supremacy' by outperforming a classical supercomputer on some problems.

Quantum supremacy is a key milestone on the journey towards quantum computing. The idea is that if a quantum processor can be operated with low enough error rates, it could outperform a classical supercomputer on a well-defined computer science problem.

Quantum computers are an area of huge interest because, if they can be built at a large enough scale, they could rapidly solve problems that cannot be handled by traditional computers. That's why the biggest names in tech are racing ahead with quantum computing projects: in January Intel announced its own 49-qubit quantum chip, for example.

"We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone," said Julian Kelly, a research scientist at the Quantum AI Lab.

"We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives," Kelly said -- although he did not offer a timescale for this achievement.

If a quantum processor is to run algorithms beyond the scope of classical simulations, a large number of qubits are required, along with low error rates on readout and logical operations, such as single and two-qubit gates.


Although researchers have yet to achieve quantum supremacy, Google thinks it can be demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5 percent.

Google said its new 72-qubit Bristlecone device uses the same scheme for coupling, control, and readout as its previous 9-qubit linear array. With the new processor, researchers are looking to achieve similar performance to the best error rates of the 9-qubit device, but now across all 72 qubits of Bristlecone. The 9-qubit device demonstrated low error rates for readout (one percent), single-qubit gates (0.1 percent) and most importantly two-qubit gates (0.6 percent) as its best result.
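
Those error targets are demanding because per-gate errors compound across a deep circuit. A rough back-of-envelope estimate follows, assuming independent errors and roughly half the qubits participating in a two-qubit gate each layer; both are simplifying assumptions for illustration, not Google's own numbers.

    # How per-gate errors compound over a 49-qubit, depth-40 circuit.
    # Assumes independent errors and ~24 two-qubit gates per layer; both are
    # simplifying assumptions for illustration only.
    two_qubit_error = 0.005        # the 0.5 percent target quoted above
    gates = (49 // 2) * 40         # ~24 gates per layer x 40 layers = 960

    p_clean_run = (1 - two_qubit_error) ** gates
    print(f"{gates} two-qubit gates -> {p_clean_run:.1%} chance of an error-free run")
    # ~0.8%: even sub-percent gate errors leave only a small fraction of clean runs,
    # which is why pushing two-qubit error below 0.5 percent matters so much.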

"We believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers," Kelly said.

However, he added: "Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself. Getting this right requires careful systems engineering over several iterations."

Fri, 9 Mar 2018 17:56:45 +0000
It's Official: Elon Musk Will Send Humans to Mars in 2024
http://2045.com/news/35204.html

On the last day of the International Astronautical Congress in Adelaide, Australia, SpaceX CEO Elon Musk took the stage to talk about his company’s BFR project. Beyond sharing details on how the technology might be used to transform long-distance travel on Earth, Musk also explained how it could aid our off-world activities.

The basic concept behind the BFR is to make a single booster and ship that could replace the company’s Falcon 9, Falcon Heavy, and Dragon. This would let SpaceX pour all the resources currently divided across those three craft into the one project.

Once finished, the BFR could be used to launch satellites and space telescopes or clear space debris. It would also be able to dock with the International Space Station (ISS) to deliver cargo. Most extraordinary, however, is the BFR’s potential to assist in the establishment of off-world colonies.

Mission to Mars

The present BFR design is big enough to carry up to 100 people and sufficient equipment, which Musk thinks will be instrumental in establishing a base of operations on the Moon.

“It’s 2017, I mean, we should have a lunar base by now,” he said during his IAC presentation. “What the hell is going on?”

Musk’s ambitions go well beyond the Moon, however. SpaceX’s goal of journeying to Mars as soon as it has the resources to do so is well known, and in last night’s presentation, Musk shared images of a fully fledged Martian city. Construction on SpaceX’s first ship able to head to Mars is projected to start within the next nine months, and Musk hopes to send a pair of cargo ships to the planet in 2022, though he conceded that this goal is somewhat “aspirational.”

In 2024, SpaceX would send astronauts to the Red Planet aboard two crewed BFRs. These first “settlers” would build a fuel plant that would serve as the start of the Martian colony. After that, the plan is to construct multiple landing pads, then expand into terraforming and the construction of an urban environment.

Musk’s goals are certainly daring. Then again, putting humans on Mars will take some big, bold ideas, and his certainly qualify.

Thu, 8 Mar 2018 17:41:43 +0000
Death-Bringing 'Brain Tsunamis' Have Been Observed in Humans For The First Time
http://2045.com/news/35205.html

For the first time, researchers have been able to study the moment brain death becomes irreversible in the human body, observing the phenomenon in several Do Not Resuscitate patients as they died in hospital.

For years, scientists have researched what happens to your brain when you die, but despite everything we've found out, progress has been stymied by an inability to easily monitor human death – since physicians are conventionally obliged to prevent death if they can, not monitor it as it takes hold.

What this means is that most of our understanding of the processes involved in brain death comes from animal experiments, strengthened with what we can glean from the accounts of resuscitated patients disclosing their near-death experiences.

Now, an international team of scientists looks to have made a breakthrough.

In animals, within 20 to 40 seconds of oxygen deprivation, the brain enters an 'energy-saving mode' where it becomes electrically inactive and neurons cease communicating with one another.

After a few minutes, the brain begins to break down as ion gradients in cells dissipate, and a wave of electrochemical energy – called a spreading depolarisation, or 'brain tsunami' – spreads throughout the cortex and other brain regions, ultimately causing irreversible brain damage.

But a team led by neurologist Jens Dreier from Universitätsmedizin Berlin in Germany – who monitored these processes taking place in nine patients with devastating brain injuries (under Do Not Resuscitate – Comfort Care orders) – says the tsunami of brain death may actually be capable of being stopped.

"After circulatory arrest, spreading depolarisation marks the loss of stored electrochemical energy in brain cells and the onset of toxic processes that eventually lead to death," Dreier explains.

"Importantly, it is reversible – up to a point – when the circulation is restored."

Using neuro-monitoring technology called subdural electrode strips and intraparenchymal electrode arrays, the researchers monitored spreading depolarisation in the patients' brains, and they suggest it's not a one-way wave – as long as circulation (and thus oxygen supply) can be resumed to the brain.

"Anoxia-triggered [spreading depolarisation] is fully reversible without any signs of cellular damage, if the oxidative substrate supply is re-established before the so-called commitment point, defined as the time when neurons start dying under persistent depolarisation," the authors explain in their paper.

For patients at risk of brain damage or death incurred through cerebral ischemia or other kinds of stroke, the findings could one day be a life-saver, although the researchers explain a lot more work is needed before physicians will be able to take advantage of these discoveries.

"There are no direct implications for patient care today," Dreier sayspointing outmore observations will be essential to understand what's really going on here.

"Knowledge of the processes involved in spreading depolarisation is fundamental to the development of additional treatment strategies aimed at prolonging the survival of nerve cells when brain perfusion is disrupted."

The findings are reported in Annals of Neurology.

Fri, 2 Mar 2018 17:42:57 +0000
This New Lightweight Humanoid Robot Can Put Out Fires And Pick Up Debris
http://2045.com/news/35203.html

Researchers at IIT-Istituto Italiano di Tecnologia in Genova, Italy tested a new robotic avatar they say could be used by emergency response teams in the future.

The robotic avatar, called the Walk-Man robot, is a lighter version of the original robot which launched in 2015 as part of the DARPA robotics challenge. The robot is controlled by a human operator remotely through a virtual interface. As it operates, the robot collects images and transmits these back to the emergency teams who can assess the situation and guide the robot remotely to the most critical areas. 

During the testing scenario in the IIT laboratories, the Walk-Man navigated through damaged rooms and performed four tasks: opening and walking through a door to enter the disaster zone; locating the control valve to close off a gas leak; removing debris in its path; and identifying and putting out fires with a fire extinguisher.

The robot has 32 motors and control boards, four force and torque sensors on its hands and feet, and two accelerometers controlling its balance. Walk-Man is equipped with cameras, a 3D laser scanner, and microphone sensors and has the option to be equipped with chemical sensors, depending on usage requirements.

Second generation humanoid robot Walk-Man is a robotic avatar designed to support emergency response teams.

Walk-Man is a little more than six feet tall (1.85 meters), is made of a variety of lightweight materials including Ergal, titanium, iron, magnesium alloys and plastic, and weighs around 224 pounds (102 kilos).

Researchers say the lighter weight allows the robot to move faster, and the lighter upper body enables it to react faster and maintain its balance on rough and uneven terrain. The lighter body also consumes less energy, allowing the robot to operate for about two hours on a smaller 1 kWh battery and to carry heavier objects for more extended periods of time. In the testing scenario, the robot could carry an object for 10 minutes.

Mon, 26 Feb 2018 09:27:14 +0000
Toyota Unveils Third Generation Humanoid Robot T-HR3
http://2045.com/news/35198.html

Toyota City, Japan, November 21, 2017―Toyota Motor Corporation (Toyota) today revealed T-HR3, the company's third generation humanoid robot. Toyota's latest robotics platform, designed and developed by Toyota's Partner Robot Division, will explore new technologies for safely managing physical interactions between robots and their surroundings, as well as a new remote maneuvering system that mirrors user movements to the robot.

T-HR3 reflects Toyota's broad-based exploration of how advanced technologies can help to meet people's unique mobility needs. T-HR3 represents an evolution from previous generation instrument-playing humanoid robots, which were created to test the precise positioning of joints and pre-programmed movements, to a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.

"The Partner Robot team members are committed to using the technology in T-HR3 to develop friendly and helpful robots that coexist with humans and assist them in their daily lives. Looking ahead, the core technologies developed for this platform will help inform and advance future development of robots to provide ever-better mobility for all," said Akifumi Tamaoki, General Manager, Partner Robot Division.

T-HR3 is controlled from a Master Maneuvering System that allows the entire body of the robot to be operated instinctively with wearable controls that map hand, arm and foot movements to the robot, and a head-mounted display that allows the user to see from the robot's perspective. The system's master arms give the operator full range of motion of the robot's corresponding joints and the master foot allows the operator to walk in place in the chair to move the robot forward or laterally. The Self-interference Prevention Technology embedded in T-HR3 operates automatically to ensure the robot and user do not disrupt each other's movements.

Onboard T-HR3 and the Master Maneuvering System, motors, reduction gears and torque sensors (collectively called Torque Servo Modules) are connected to each joint. These modules communicate the operator's movements directly to T-HR3's 29 body parts and the Master Maneuvering System's 16 master control systems for a smooth, synchronized user experience. The Torque Servo Module has been developed in collaboration with Tamagawa Seiki Co., Ltd. and NIDEC COPAL ELECTRONICS CORP. This technology advances Toyota's research into safe robotics by measuring the force exerted by and on T-HR3 as it interacts with its environment and then conveying that information to the operator using force feedback.

The Torque Servo Module enables T-HR3's core capabilities: Flexible Joint Control, to control the force of contact the robot makes with any individuals or objects in its surrounding environment; Whole-body Coordination and Balance Control, to maintain the robot's balance if it collides with objects in its environment; and Real Remote Maneuvering, to give users seamless and intuitive control over the robot. These functions have broad implications for future robotics research and development, especially for robots that operate in environments where they must safely and precisely interact with their surroundings.
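
As a rough illustration of the control ideas in this description, the sketch below shows a single joint tracking the operator's commanded angle with a spring-damper (impedance-style) law while the torque measured at the joint is scaled back to the operator as force feedback. The gains, units and structure are illustrative assumptions, not Toyota's implementation.

    # Toy teleoperation sketch: one joint follows the operator's commanded angle
    # with a spring-damper law, and the sensed joint torque is echoed back to the
    # master device. Gains and structure are invented, not Toyota's design.
    def joint_command(q_des, q, dq, kp=40.0, kd=2.0):
        """Torque command: pull toward the operator's pose, damp the motion."""
        return kp * (q_des - q) - kd * dq

    def operator_feedback(measured_torque, scale=0.3):
        """Fraction of the sensed joint torque reflected to the master controller."""
        return scale * measured_torque

    # One control tick: joint at 0.10 rad moving at 0.05 rad/s, operator asks for
    # 0.25 rad, and the torque sensor reads 1.5 N*m of contact with the environment.
    print(joint_command(0.25, 0.10, 0.05))   # commanded motor torque
    print(operator_feedback(1.5))            # torque echoed to the operator's arm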

Since the 1980s, Toyota has been developing industrial robots to enhance its manufacturing processes. Partner Robot has utilized the insights from that experience and built on Toyota's expertise in automotive technologies to develop new mobility solutions that support doctors, caregivers and patients, the elderly, and people with disabilities.

T-HR3 will be featured at the upcoming International Robot Exhibition 2017 at Tokyo Big Sight from November 29 through December 2.

About Toyota Motor Corporation

Toyota Motor Corporation (TMC) is the global mobility company that introduced the Prius hybrid-electric car in 1997 and the first mass-produced fuel cell sedan, Mirai, in 2014. Headquartered in Toyota City, Japan, Toyota has been making cars since 1937. Today, Toyota proudly employs 370,000 employees in communities around the world. Together, they build around 10 million vehicles per year in 29 countries, from mainstream cars and premium vehicles to mini-vehicles and commercial trucks, and sell them in more than 170 countries under the brands Toyota, Lexus, Daihatsu and Hino. For more information, please visit www.toyota-global.com.

]]>
Tue, 21 Nov 2017 22:51:50 +0000
<![CDATA[BOSTON DYNAMICS' ATLAS ROBOT DOES BACKFLIPS NOW AND IT'S FULL-TILT INSANE]]>http://2045.com/news/35195.html35195Atlas, the hulking humanoid robot from Boston Dynamics, now does backflips. I’ll repeat that. It’s a hulking humanoid that does backflips.

Check out the video below, because it shows a hulking humanoid doing a backflip. And that’s after it leaps from platform to platform, as if such behavior were becoming of a bipedal robot.

To be clear: Humanoids aren’t supposed to be able to do this. It's extremely difficult to make a bipedal robot that can move effectively, much less kick off a tumbling routine. The beauty of four-legged robots is that they balance easily, both at rest and as they’re moving, but bipeds like Atlas have to balance a bulky upper body on just two legs. Accordingly, you could argue that roboticists can better spend their time on non-human forms that are easier to master.

But there’s a case to be made for Atlas and the other bipeds like Cassie (which walks more like a bird than a human). We live in a world built for humans, so there may be situations where you want to deploy a robot that works like a human. If you have to explore a contaminated nuclear facility, for instance, you’ll want something that can climb stairs and ladders, and turn valves. So a humanoid may be the way to go.

If anything gets there, it’ll be Atlas. Over the years, it’s grown not only more backflippy but lighter and more dextrous and less prone to fall on its face. Even if it does tumble, it can now get back up on its own. So it’s not hard to see a future when Atlas does indeed tread where fleshy humans dare not. Especially now that Boston Dynamics is part of the Japanese megacorporation SoftBank, which may have some cash to spend.

While Atlas doing backflips is full-tilt insane, humanoids still struggle. Manipulation, for one, poses a big obstacle, because good luck replicating the human hand. And battery life is a nightmare, what with all the balancing. But who knows, maybe one day humanoids will flip into our lives, or at the very least at the Olympics.

]]>
Fri, 17 Nov 2017 22:40:01 +0000
<![CDATA[Speedy collision detector could make robots better human assistants]]>http://2045.com/news/35199.html35199Electrical engineers at the University of California San Diego have developed a faster collision detection algorithm that uses machine learning to help robots avoid moving objects and weave through complex, rapidly changing environments in real time. The algorithm, dubbed "Fastron," runs up to 8 times faster than existing collision detection algorithms.

A team of engineers, led by Michael Yip, a professor of electrical and computer engineering and member of the Contextual Robotics Institute at UC San Diego, will present the new algorithm at the first annual Conference on Robot Learning Nov. 13 to 15 at Google headquarters in Mountain View, Calif. The conference brings the top machine learning scientists to an invitation-only event. Yip's team will deliver one of the long talks during the 3-day conference.

The team envisions that Fastron will be broadly useful for robots that operate in human environments where they must be able to work with moving objects and people fluidly. One application they are exploring in particular is robot-assisted surgeries using the da Vinci Surgical System, in which a robotic arm would autonomously perform assistive tasks (suction, irrigation or pulling tissue back) without getting in the way of the surgeon-controlled arms or the patient's organs.

"This algorithm could help a robot assistant cooperate in surgery in a safe way," Yip said.

The team also envisions that Fastron can be used for robots that work at home for assisted living applications, as well as for computer graphics for the gaming and movie industry, where collision checking is often a bottleneck for most algorithms.

A problem with existing collision detection algorithms is that they are very computation-heavy. They spend a lot of time specifying all the points in a given space--the specific 3D geometries of the robot and obstacles--and performing collision checks on every single point to determine whether two bodies are intersecting at any given time. The computation gets even more demanding when obstacles are moving.

To lighten the computational load, Yip and his team in the Advanced Robotics and Controls Lab (ARClab) at UC San Diego developed a minimalistic approach to collision detection. The result was Fastron, an algorithm that uses machine learning strategies--which are traditionally used to classify objects--to classify collisions versus non-collisions in dynamic environments. "We actually don't need to know all the specific geometries and points. All we need to know is whether the robot's current position is in collision or not," said Nikhil Das, an electrical engineering Ph.D. student in Yip's group and the study's first author.

The name Fastron comes from combining Fast and Perceptron, which is a machine learning technique for performing classification. An important feature of Fastron is that it updates its classification boundaries very quickly to accommodate moving scenes, something that has been challenging for the machine learning community in general to do.

Fastron's active learning strategy works using a feedback loop. It starts out by creating a model of the robot's configuration space, or C-space, which is the space showing all possible positions the robot can attain. Fastron models the C-space using just a sparse set of points, consisting of a small number of so-called collision points and collision-free points. The algorithm then defines a classification boundary between the collision and collision-free points--this boundary is essentially a rough outline of where the abstract obstacles are in the C-space. As obstacles move, the classification boundary changes. Rather than performing collision checks on each point in the C-space, as is done with other algorithms, Fastron intelligently selects checks near the boundaries. Once it classifies the collisions and non-collisions, the algorithm updates its classifier and then continues the cycle.

Because Fastron's models are simpler, the researchers set its collision checks to be more conservative. Since just a few points represent the entire space, Das explained, it's not always certain what's happening in the space between two points, so the team developed the algorithm to predict a collision in that space. "We leaned toward making a risk-averse model and essentially padded the workspace obstacles," Das said. This ensures that the robot can be tuned to be more conservative in sensitive environments like surgery, or for robots that work at home for assisted living.
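
To make the loop above concrete, here is a toy kernel-perceptron proxy collision detector in Python. It is a sketch under stated assumptions rather than the authors' released code: the `in_collision` oracle, the RBF kernel width, the sample count and the boundary band are all invented for the example, and the `predict` tie-break errs on the side of reporting a collision, in the spirit of the conservative padding Das describes.

```python
import numpy as np

def in_collision(q, obstacle_center, obstacle_radius=0.3):
    """Hypothetical ground-truth checker: a point robot versus one moving disc."""
    return 1.0 if np.linalg.norm(q - obstacle_center) < obstacle_radius else -1.0

def rbf(points, q, gamma=10.0):
    return np.exp(-gamma * np.sum((points - q) ** 2, axis=-1))

class FastronSketch:
    def __init__(self, samples, oracle):
        self.X = samples                                   # sparse set of C-space samples
        self.alpha = np.zeros(len(samples))                # kernel-perceptron weights
        self.y = np.array([oracle(x) for x in samples])    # cached oracle labels

    def score(self, q):
        return float(np.dot(self.alpha, rbf(self.X, q)))

    def predict(self, q):
        # Conservative tie-break: a zero score counts as "in collision".
        return 1.0 if self.score(q) >= 0 else -1.0

    def update(self, oracle, boundary_band=0.5, max_iters=200):
        # Active learning step: re-run the (expensive) collision check only for
        # samples whose score lies near the decision boundary, then apply
        # perceptron-style corrections until the cached labels are satisfied
        # or the iteration budget runs out.
        scores = np.array([self.score(x) for x in self.X])
        near = np.abs(scores) < boundary_band
        self.y[near] = [oracle(x) for x in self.X[near]]
        for _ in range(max_iters):
            margins = np.array([self.y[i] * self.score(x)
                                for i, x in enumerate(self.X)])
            worst = int(np.argmin(margins))
            if margins[worst] > 0:                         # every checked sample agrees
                break
            self.alpha[worst] += self.y[worst]

# Toy usage: the obstacle drifts to the right and the proxy model tracks it.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(200, 2))
model = FastronSketch(samples, lambda q: in_collision(q, np.array([0.0, 0.0])))
for step in range(4):
    center = np.array([0.2 * step, 0.0])
    model.update(lambda q, c=center: in_collision(q, c))
    print(step, "origin predicted in collision:", model.predict(np.zeros(2)) > 0)
```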

The team has so far demonstrated the algorithm on simulated robots and obstacles. Moving forward, the team is working to further improve the speed and accuracy of Fastron. Their goal is to implement Fastron in robotic surgery and home-care robot settings.

###

Paper title: "Fastron: An Online Learning-Based Model and Active Learning Strategy for Proxy Collision Detection." Authors of the study are Nikhil Das, Naman Gupta and Michael Yip in the Advanced Robotics and Controls Lab (ARClab) at UC San Diego.

]]>
Tue, 14 Nov 2017 22:54:49 +0000
<![CDATA[MIT’s remote control robot system puts VR to work]]>http://2045.com/news/35184.html35184MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has come up with a use for virtual reality headsets that goes beyond firing them up, checking out a new game, muttering “cool” briefly after 5 minutes of use and then putting them back in the closet: Controlling robots remotely for manufacturing jobs.

The CSAIL research project combines two things with questionable utility into one with real potential, marrying telepresence robotics and VR with manufacturing positions. The system gives the operator a number of ‘sensor displays’ to make it feel like they’re right inside the robot’s head on site, and even employs hand controllers to provide direct control over the robot’s grippers.

This system actually uses a simplified approach compared to a lot of 3D virtual simulated remote working environments, since it just takes the 2D images captured by the robot’s sensors, and displays them to each of the operator’s eyes. The operator’s brain does all the heavy lifting of inferring 3D space – which makes the experience graphically light, and actually decreases queasiness and other negative effects.
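
The display side of that idea is simple enough to sketch. Assuming the robot streams ordinary left- and right-camera frames, a side-by-side stereo frame for a headset is just an image concatenation; the `side_by_side_stereo` helper and the dummy frames below are our illustration, not CSAIL's code.

```python
# Minimal sketch of the display idea: no 3D reconstruction, just one 2D camera
# image per eye, with the operator's visual system left to fuse them into depth.

import numpy as np

def side_by_side_stereo(left_img, right_img):
    """Stack two H x W x 3 camera frames into one H x (2W) x 3 frame for a
    headset that shows the left half to the left eye and the right half to the
    right eye."""
    assert left_img.shape == right_img.shape
    return np.concatenate([left_img, right_img], axis=1)

# Usage with dummy frames standing in for the robot's two camera feeds.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
print(side_by_side_stereo(left, right).shape)   # (480, 1280, 3)
```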

The CSAIL team used a robot called Baxter, and operating it also makes you feel as if you’re right inside its head. It’s designed to create a “homunculus model of mind,” or the feeling that you’re a small human sitting in the brain of a large humanoid robot, essentially piloting it – like a mech pilot might in, say, Guillermo del Toro’s Pacific Rim.

Despite CSAIL’s unconventional approach, participants in the study had a higher success rate than with state-of-the-art, more complex alternatives, and gamers in particular were adept at this kind of remote control. MIT CSAIL even proposes that the system could help put some of the growing population of young jobless gamers on a new career path in remotely operated manufacturing work.

]]>
Mon, 2 Oct 2017 23:41:42 +0000
<![CDATA[Robots could destabilise world through war and unemployment, says UN]]>http://2045.com/news/35183.html35183United Nations opens new centre in Netherlands to monitor artificial intelligence and predict possible threats

The UN has warned that robots could destabilise the world ahead of the opening of a headquarters in The Hague to monitor developments in artificial intelligence.

From the risk of mass unemployment to the deployment of autonomous robotics by criminal organisations or rogue states, the new Centre for Artificial Intelligence and Robotics has been set the goal of second-guessing the possible threats.

It is estimated that 30% of jobs in Britain are potentially under threat from breakthroughs in artificial intelligence, according to the consultancy firm PwC. In some sectors half the jobs could go. A recent study by the International Bar Association claimed robotics could force governments to legislate for quotas of human workers.

Meanwhile nations seeking to develop autonomous weapons technology, with the capability to independently determine their courses of action without the need for human control, include the US, China, Russia and Israel.

Irakli Beridze, senior strategic adviser at the United Nations Interregional Crime and Justice Research Institute, said the new team based in the Netherlands would also seek to come up with ideas as to how advances in the field could be exploited to help achieve the UN’s targets. He also said there were great risks associated with developments in the technology that needed to be addressed.

“If societies do not adapt quickly enough, this can cause instability,” Beridze told the Dutch newspaper de Telegraaf. “One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organisations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this we want to start concrete projects. We will not be a talking club.”

In August more than 100 robotics and artificial intelligence leaders, including the billionaire head of Tesla, Elon Musk, urged the UN to take action against the dangers of the use of artificial intelligence in weaponry, sometimes referred to as “killer robots”.

They wrote: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Last year Prof Stephen Hawking warned that powerful artificial intelligence would prove to be “either the best or the worst thing ever to happen to humanity”.

An agreement was sealed with the Dutch government earlier this year for the UN office, which will have a small staff in its early stages, to be based in The Hague.

Beridze said: “Various UN organisations have projects on robotic and artificial intelligence research, such as the expert group on autonomous military robots of the convention on conventional weapons. These are temporary initiatives.

“Our centre is the first permanent UN office for this theme. We look at both the risks and the benefits.”

]]>
Mon, 2 Oct 2017 23:40:09 +0000
<![CDATA[Are Computers Already Smarter Than Humans?]]>http://2045.com/news/35182.html35182Who’s smarter — you, or the computer or mobile device on which you’re reading this article? The answer is increasingly complex, and depends on definitions in flux. Computers are certainly more adept at solving quandaries that benefit from their unique skillset, but humans hold the edge on tasks that machines simply can’t perform. Not yet, anyway.

Computers can take in and process certain kinds of information much faster than we can. They can swirl that data around in their "brains," made of processors, and perform calculations to conjure multiple scenarios at superhuman speeds. For example, the best chess-trained computers can at this point strategize many moves ahead, problem-solving far more deftly than can the best chess-playing humans. Computers learn much more quickly, too, narrowing complex choices to the most optimal ones. Yes, humans also learn from mistakes, but when it comes to tackling the kinds of puzzles computers excel at, we're far more fallible.

Computers enjoy other advantages over people. They have better memories, so they can be fed a large amount of information, and can tap into all of it almost instantaneously. Computers don’t require sleep the way humans do, so they can calculate, analyze and perform tasks tirelessly and round the clock. Notwithstanding bugs or susceptibility to power blackouts, computers are simply more accurate at pulling off a broadening range of high-value functions than we are. They’re not affected or influenced by emotions, feelings, wants, needs and other factors that often cloud the judgement and intelligence of us mere mortals.

On the other hand, humans are still superior to computers in many ways. We perform tasks, make decisions, and solve problems based not just on our intelligence but on our massively parallel processing wetware — in abstract, what we like to call our instincts, our common sense, and perhaps most importantly, our life experiences. Computers can be programmed with vast libraries of information, but they can’t experience life the way we do. Humans possess traits we sometimes refer to (again, in the abstract) as creativity, imagination and inspiration. A person can write a poem, compose and play music, sing a song, create a painting or dream up a new invention. Computers can be programmed to replicate some of those tasks, but they don’t possess the innate ability to create the way humans do.

What do experts in artificial intelligence make of all this? Let's start by defining what we mean by "smarter" or "more intelligent." Intelligence has two components, says Professor Shlomo Maital, Senior Research Fellow for the S. Neaman Institute at Technion - Israel Institute of Technology. One is the ability to learn, the other is the ability to solve problems. And in those areas, computers can be smarter than humans.

“Today, computers can learn faster than humans, e.g., (IBM’s) Watson can read and remember all the research on cancer, no human could,” says Maital. “With deep learning, Watson can also solve a problem, for example, how to treat a rare form of cancer — and it has done so. So in that sense, computers can be smarter than humans.”

Maital points to another example of computer intelligence in his article “Will robots soon be smarter than humans?” On February 10, 1996, IBM’s Deep Blue computer defeated world champion Garry Kasparov in the first game of a six-game match, and a year later it went on to win a rematch, becoming the first computer ever to beat a reigning world champion over a full match. Was Deep Blue intelligent? Yes and no, says Maital.

“No, because it was simply able to calculate an enormous number of possible chess moves in a fraction of a second,” writes Maital. “Speed is not intelligence. But, yes, because it was able to analyze these chess moves and pick the best one sufficiently well to beat Kasparov.”

Computers don’t suffer from important limitations that plague human beings. They’re not restricted by biology, they don’t get tired, they can crunch numbers for long hours, and they’re exceptionally smart while doing repetitive mathematical tasks, according to Satya Mallick from LearnOpenCV.com and the founder of Big Vision LLC.

“From an A.I. perspective, we can now train computers to perform better than humans in many tasks, for instance some visual recognition tasks,” says Mallick. “These tasks have one thing in common: there is a vast amount of data we can gather to solve these tasks and/or they are repetitive tasks. Any repetitive task that creates a lot of data will eventually be learned by computers.”

But experts agree that humans still tower over computers in general intelligence, creativity, and a common-sense knowledge or understanding of the world.

“Computers can outperform humans on certain specialized tasks, such as playing [the game] go or chess, but no computer program today can match human general intelligence,” says Murray Shanahan, Professor of Cognitive Robotics for the Department of Computing at Imperial College in London. “Humans learn to achieve many different types of goals in a huge variety of environments. We don't yet know how to endow computers with the kind of common sense understanding of the everyday world that underpins human general intelligence, although I'm sure we will succeed in doing this one day.”

People possess creativity and intuition, both qualities that computer code doesn’t have, but more importantly may never have, according to John Grohol, founder & CEO of PsychCentral.com.

“We can, for instance, have computers mimic creativity through subsuming works of art into a database, and then creating a new work of ‘art’ from some amalgamation,” says Grohol. “But is that the same as human creativity, or is the computer's code simply following an instruction set? I'd argue it's very much just the latter, which makes the computer far inferior when it comes to that component of intelligence.”

Computers have no concept of meaning the way a human does, says Jana Eggers, CEO of artificial intelligence company Nara Logics. “Even if the computer can determine an emotion, it does not understand what experiencing an emotion means,” according to Eggers. “Will they? It is possible, but not clear how that will work with the current forms of computing.”

But what if we roll the clock far enough ahead? Experts generally agree that the computers of tomorrow will possess some of the traits that today are seen as uniquely human.

“The human brain has 86 billion neurons (nerve cells), all interconnected,” says Maital. “Computer neural networks have far, far fewer ‘cells.’ But one day such neural networks will reach the complexity and sophistication of the brain.”

All of this is likely coming sooner than later, believes Grohol. “Once we've cracked the neurocode that runs our brains, I believe we could replicate that structure and function artificially, so we could truly create artificial life with artificial intelligence,” he says. “I could definitely see that happening within the next century.”

Some people, such as computer scientist Ray Kurzweil and Tesla co-founder Elon Musk, have warned against the potential dangers of A.I., envisioning a Terminator-type future in which machines have run amok. We certainly need to keep a handle on artificial intelligence so that we control the machines rather than the other way around. But the question seems less one of Hollywood-style "evil" machines rising up to exterminate puny humans, than of alignment: how do we ensure that machine intelligence that may eventually be utterly beyond our comprehension remains fully aligned with our own interests?

Part of that means rethinking how we approach these questions. Rather than obsessing over who’s smarter or irrationally fearing the technology, we need to remember that computers and machines are designed to improve our lives, just as IBM’s Watson computer is helping us in the fight against deadly diseases. The trick, as computers become better and better at these and any number of other tasks, is ensuring that "helping us" remains their prime directive.

“The important thing to keep in mind is that it is not man versus machine,” says Mallick. “It is not a competition. It is a collaboration.”

]]>
Fri, 29 Sep 2017 23:33:19 +0000
<![CDATA[Deus ex machina: former Google engineer is developing an AI god]]>http://2045.com/news/35180.html35180Way of the Future, a religious group founded by Anthony Levandowski, wants to create a deity based on artificial intelligence for the betterment of society

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

As author Yuval Noah Harari notes: “That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.”

Religions, Harari argues, must keep up with the technological advancements of the day or they become irrelevant, unable to answer or understand the quandaries facing their disciples.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

“And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek the advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

]]>
Thu, 28 Sep 2017 23:28:07 +0000
<![CDATA[This robotic glove will give you bionic hands]]>http://2045.com/news/35179.html35179A startup called Nuada has developed a soft, robotic glove that gives people with hand pain or weakness a strong grip.

According to co-founders Filipe Quinaz and Vitor Crespo, the glove contains a "mesh" of artificial tendons and sensors. These are controlled by an electromechanical system contained in a smartwatch-like device worn on the same hand.

A user activates the glove by lightly flexing their wrist. The glove then understands that they want an assist, and can help them with any movement. Typically, users will employ the glove to pick up, maneuver or hold a heavy object, whether that's a bag of groceries or a car battery.
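
One plausible, and entirely hypothetical, reading of that trigger is a sustained-threshold check on the wrist-flex sensor, sketched below in Python; the `wants_assist` helper, the threshold and the sample counts are assumptions, not Nuada's firmware.

```python
# Toy illustration of the activation idea: a light but sustained wrist flex
# that stays above a threshold for a few consecutive samples is taken as
# "the user wants an assist", while a brief twitch is ignored.

def wants_assist(flex_samples, threshold=0.3, hold_samples=5):
    """flex_samples: recent wrist-flex sensor readings, normalised to 0..1."""
    recent = flex_samples[-hold_samples:]
    return len(recent) == hold_samples and all(s > threshold for s in recent)

# Usage: a brief twitch is ignored, a sustained light flex triggers the assist.
print(wants_assist([0.0, 0.1, 0.5, 0.1, 0.0]))            # False
print(wants_assist([0.1, 0.35, 0.4, 0.4, 0.38, 0.36]))    # True
```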

A more advanced (and expensive) version of the Nuada glove can predict the wearer's movement and assist them automatically. The advanced version also works with a mobile app that gathers data about the hand's activity. A physical therapist, for example, could review the data and help the wearer to become more ergonomically healthy.

Nuada began with the idea of creating a medical-grade prosthetic, Quinaz told CNBC. But the company decided to develop more of a generally helpful tool after hearing tremendous demand from employers whose staff do a lot of manual labor, and seniors with weakness in their hands.

Hand injuries befell more than 140,000 workers in the U.S. alone in 2015, according to the most recent available data from the U.S. Department of Labor. And repetitive, physical tasks at work, whether you're a camera operator, EMT or waiter, can cause fatigue and pain even in healthy workers. Meanwhile, arthritis of the hand affects tens of millions in the U.S. alone.

Based in Braga, Portugal, the company is currently testing its robo-gloves with large employers there, including a Volkswagen factory and the retail conglomerate Sonae. The company has raised seed funding from the hardware accelerator HAX and its affiliated venture firm SOSV.


]]>
Sat, 23 Sep 2017 08:50:01 +0000
<![CDATA[Ambitious neuroscience project to probe how the brain makes decisions]]>http://2045.com/news/35181.html35181World-leading neuroscientists have launched an ambitious project to answer one of the greatest mysteries of all time: how the brain decides what to do.

The international effort will draw on expertise from 21 labs in the US and Europe to uncover for the first time where, when, and how neurons in the brain take information from the outside world, make sense of it, and work out how to respond.

If the researchers can unravel what happens in detail, it would mark a dramatic leap forward in scientists’ understanding of a process that lies at the heart of life, and which ultimately has implications for intelligence and free will.

“Life is about making decisions,” said Alexandre Pouget, a neuroscientist involved in the project at the University of Geneva. “It’s one decision after another, on every time scale, from the most mundane thing to the most fundamental in your life. It is the essence of what the brain is about.”

Backed with an initial £10m ($14m) from the US-based Simons Foundation and the Wellcome Trust, the endeavour will bring neuroscientists together into a virtual research group called the International Brain Laboratory (IBL). Half of the IBL researchers will perform experiments and the other half will focus on theoretical models of how the brain makes up its mind.

The IBL was born largely out of the realisation that many problems in modern neuroscience are too hard for a single lab to crack. But the founding scientists are also frustrated at how research is done today. While many neuroscientists work on the same problems, labs differ in the experiments and data analyses they run, often making it impossible to compare results across labs and build up a confident picture of what is really happening in the brain.

“It happens all the time that we read a paper that gets different results from us, and we won’t know if it’s for deep scientific reasons, or because there are small differences in the way the science is carried out,” said Anne Churchland, a neuroscientist involved in the project at Cold Spring Harbor Lab in New York. “At the moment, each lab has its own way of doing things.”

The IBL hopes to overcome these flaws. Scientists on the project will work on exactly the same problems in precisely the same way. Animal experiments, for example, will use one strain of mouse, and all will be trained, tested and scored in the same way. It is an obvious strategy, but not a common one in science: in any lab, there is a constant urge to tweak experiments to make them better. “Ultimately, the reason it’s worth addressing is in the proverb: ‘alone we go fast, together we go far’,” said Churchland.

The IBL’s results will be analysed with the same software and shared with other members immediately. The openness mirrors the way physicists work at Cern, the particle physics laboratory near Geneva that is home to the Large Hadron Collider. For now, the IBL team includes researchers from UCL, Princeton, Stanford, Columbia, Ecole Normale Paris, and the Champalimaud Centre in Lisbon, but over the 10 to 15-year project, more scientists are expected to join.

Decision-making is a field in itself, so IBL researchers will focus on simple, so-called perceptual decisions: those that involve responding to sights or sounds, for example. In one standard test, scientists will record how neurons fire in mice as they watch faint dots appear on a screen and spin a Lego wheel to indicate if the dots are on the left or the right. The mice make mistakes when the dots are faint, and it is these marginal calls that are most interesting to scientists.

Matteo Carandini, a neuroscientist involved in the IBL at University College London, compares the task to a cyclist approaching traffic lights in the rain. “If the light is green, you go, and if it’s red, you stop, but there’s often uncertainty. Very often you see only a bit of red, you’re not sure it’s even a traffic light, but you need to make a decision.”

Modern neuroscience textbooks have only a coarse description of how perceptual decisions are made. When light from a traffic light hits the eye, the retina converts it into electrical impulses that are sent to the visual cortex. The image is interpreted, and at some point a decision is made whether or not to fire neurons in the motor cortex and move in response. By recording from thousands of neurons throughout the mouse brain, IBL scientists hope to learn how and when neurons are pulled into the process.

The IBL has not set its sights on explaining complex decisions: which flat to rent, who to partner up with, who to vote for. But it is a start. When it comes to human responses to the outside world, neuroscience cannot explain much beyond the knee-jerk response and ejaculation.

“What people often don’t realise is that we have no clue how the brain works,” said Carandini.

]]>
Tue, 19 Sep 2017 23:30:43 +0000
<![CDATA[How to draw electricity from the bloodstream]]>http://2045.com/news/35185.html35185Men build dams and huge turbines to turn the energy of waterfalls and tides into electricity. To produce hydropower on a much smaller scale, Chinese scientists have now developed a lightweight power generator based on carbon nanotube fibers suitable to convert even the energy of flowing blood in blood vessels into electricity. They describe their innovation in the journal Angewandte Chemie.

For thousands of years, people have used the energy of flowing or falling water for their purposes, first to power mechanical engines such as watermills, then to generate electricity by exploiting height differences in the landscape or sea tides. Using naturally flowing water as a sustainable power source has the advantage that there are (almost) no dependencies on weather or daylight. Even flexible, minute power generators that make use of the flow of biological fluids are conceivable. How such a system could work is explained by a research team from Fudan University in Shanghai, China. Huisheng Peng and his co-workers have developed a fiber with a thickness of less than a millimeter that generates electrical power when surrounded by flowing saline solution--in a thin tube or even in a blood vessel.

The construction principle of the fiber is quite simple. An ordered array of carbon nanotubes was continuously wrapped around a polymeric core. Carbon nanotubes are well known to be electroactive and mechanically stable; they can be spun and aligned in sheets. In the as-prepared electroactive threads, the carbon nanotube sheets coated the fiber core with a thickness of less than half a micron. For power generation, the thread or "fiber-shaped fluidic nanogenerator" (FFNG), as the authors call it, was connected to electrodes and immersed into flowing water or simply repeatedly dipped into a saline solution. "The electricity was derived from the relative movement between the FFNG and the solution," the scientists explained. According to the theory, an electrical double layer is created around the fiber, and then the flowing solution distorts the symmetrical charge distribution, generating an electrical potential gradient along the long axis.

The power output efficiency of this system was high. Compared with other types of miniature energy-harvesting devices, the FFNG was reported to show a superior power conversion efficiency of more than 20%. Other advantages are elasticity, tunability, light weight, and one-dimensionality, offering prospects for exciting technological applications. The FFNG can be made stretchable just by spinning the sheets around an elastic fiber substrate. If woven into fabrics, wearable electronics thus become a very interesting option for FFNG application. Another exciting application is the harvesting of electrical energy from the bloodstream for medical applications. First tests with frog nerves proved to be successful.

]]>
Fri, 8 Sep 2017 23:45:01 +0000
<![CDATA[This vacuum-activated modular robot is equally nasty and neat]]>http://2045.com/news/35174.html35174Soft robots are a major area of research right now, but the general paradigm seems to be that you pump something (a muscle or tube) full of something else (air, fluid) causing it to change its shape. But a robot from Swiss roboticists does the opposite: its little muscles tense when the air in them is removed. It’s cool — but it also looks kind of gross.

Each little section has several muscles, each of which can be contracted to different degrees to twist that section and exert force in some direction. Contract them all in one direction and the bot bends over; do it rhythmically and it can walk, or at any rate wriggle along. Or the vacuum could be let out in a suction cup, allowing the bot to attach securely to a wall, as you can see up top.
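
As a toy illustration of that rhythmic drive (our sketch, not the researchers' controller), a travelling wave of contraction commands along the module chain is enough to produce the crawling pattern described; the module count, frequency and phase step below are arbitrary.

```python
# Hand-wavy sketch of commanding a chain of vacuum modules with a travelling
# wave of contraction levels (1.0 = full vacuum, 0.0 = vented).

import math

def contraction_commands(n_modules, t, freq_hz=0.5, phase_step=math.pi / 3):
    """Return a 0..1 contraction level per module at time t (seconds)."""
    return [0.5 * (1 + math.sin(2 * math.pi * freq_hz * t - i * phase_step))
            for i in range(n_modules)]

# Usage: print the wave sweeping along a 5-module chain.
for t in (0.0, 0.5, 1.0):
    print(t, [round(v, 2) for v in contraction_commands(5, t)])
```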

It was developed by Jamie Paik and Matt Robertson at the École Polytechnique Fédérale de Lausanne (EPFL), who describe it in a paper published in the new journal Science Robotics.

And although other robot-like devices have used vacuum for various purposes — we had one on stage that used vacuum to safely grip fragile items — the researchers claim this is the first bot that works entirely by vacuum. The contractive action created by the vacuum isn’t just unique, it’s practical, Paik told me in an email.

Bending. I warned you it looked gross!

“Compared to expanding actuators, contraction is more similar to the function of biological muscle,” she said. “Without going in to more precise and detailed mimicry, this might be functionally enough an advantage in terms of applications; to mimic real muscles in cases when you’d like to work with/augment/assist body joints (as in wearable devices), and not introduce other modes of forces or motion that might impede natural function.”

It’s also totally modular, so if you want fingers or an arm made out of them, that works, and a huge, horrible snake of them is an option too. (I’d prefer you didn’t.)

“The full range of geometry and performance possible is still under investigation, but many other shapes have been tested in our lab, and the general idea is still open to many more,” wrote Paik. “Ultimately, this modular kit would be a household staple tool to automate objects or execute simple but diverse tasks (holding a nail while hammering, cleaning a refrigerator overnight, looking for lost objects around the house). Or it would be building blocks for an active wearable robots that can assist/give feedback to the user.”

Currently that testing is all manual — you have to assemble each piece and test it by hand — but the team is working on automated tools that could virtually assemble and test different configurations.

The downside of this technique is that, because vacuum pumps aren’t exactly lightweight or portable, the robot must remain tethered to one.

“Pneumatic pumps have not been optimized for portability, since they are usually used in fixed settings,” Paik explained. “Hopefully these will improve as quickly as quadrotor technology has.”

It must be said that it’s not quite as sexy as a drone you can fly in your backyard, but if Paik and Robertson’s ideas pan out, this could be a precursor to a technology as ubiquitous as those drones.

]]>
Fri, 1 Sep 2017 21:07:56 +0000
<![CDATA[A Bionic Lens Undergoing Clinical Trials Could Give You Superhuman Abilities In Two Years]]>http://2045.com/news/35176.html35176Maybe you watched Ghost in the Shell and maybe afterwards you and your friend had a conversation about whether or not you would opt in for some bionic upgrades if that was possible - like a liver that could let you drink unlimitedly or an eye that could give you superhuman vision. And maybe you had differing opinions but you concluded that it's irrelevant because the time to make such choices is far in the future. Well, it turns out, it’s two years away. 

A Canadian company called Ocumetics Technology Corporation is currently doing clinical testing for their Bionic Lens - a medical device that could make glasses and contact lenses obsolete. If everything goes smoothly, the lens could be in your eye-surgeon’s hands and in your eyes in two years. And the capabilities it will give you are truly mind-blowing.

The Bionic Lens is a dynamic lens that replaces the natural lens inside the eye via one of the most common and successful procedures in medicine - cataract surgery. Once there, the lens restores clear vision at all distances without any visual quality problems. It can auto-regulate within the eye by connecting to the muscles that change the curvature of our natural lenses, which allows it to focus at different ranges - potentially much wider ranges than our natural sight is capable of. In addition, because the Bionic Lens responds with less than 1/100 the amount of energy of the natural lens, you can also focus on something all day without any strain on the eyes. 

The Bionic Lens could improve on 20/20 vision threefold. Imagine that you can see a clock’s dial 10 feet away. With the lens you would be able to see the dial in the same detail when it is 30 feet away. When you combine this super-sharp focus with the ability to tune the lens well beyond the eye's natural capabilities, you can see really sharp details at very close distances. If you looked at a tiny sliver of your finger, for example, you would be able to see the cellular detail in it.
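
That distance claim is just visual-angle arithmetic (our back-of-the-envelope check, not the company's figures): a detail of size s viewed at distance d subtends an angle of roughly s/d, and the finest resolvable detail corresponds to some minimum angle; tripling acuity divides that minimum angle by three, so the same detail stays readable at three times the distance.

```latex
\theta \approx \frac{s}{d},
\qquad
d_{\max} = \frac{s}{\theta_{\min}},
\qquad
\theta_{\min} \to \frac{\theta_{\min}}{3}
\;\Rightarrow\;
d_{\max} \to 3\,d_{\max}
\quad (\text{e.g. } 10\ \mathrm{ft} \to 30\ \mathrm{ft}).
```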

What is even more exciting is that the lens is designed for easy access once implanted, allowing upgrades and modifications. These could include, for example, projection systems that let the user project their phone screen, NASA technologies that allow for better focusing resolution than anything seen before, or even a system that allows for slow drug delivery inside the eye.

Dr. Garth Webb, the sole innovator behind the Bionic Lens and an optometrist with over 40 years of experience, says:

“We have developed the Bionic lens to, in its default mode, make our lives function better in their normal realm and in its augmented capacity to allow for us to integrate seamlessly with the entire digital world. […] My humble perception is, that us human beings will be the center of artificial intelligence activity. So, I believe that we are going to filter and chaperon artificial intelligence that will be either around our head, or on our watch, or maybe both. So, it is, if you will, augmenting the human beyond what we normally anticipate.”

Commenting on the dark side of this technology, Webb notes that, in fact, its absence is what will eventually become the problem, as it provides “unfair” advantage to those who have it.

The early adopters will have to pay about $3200 per lens, excluding the cost of the surgery. The company has already started compiling a list of clinics and surgeons, via referrals, that it will work with.

The Bionic Lens will not be a panacea for all types of eye conditions. It can’t treat color-blindness, cloudy corneas, severe macular degeneration, severe genetic retinal diseases or torn or damaged optic nerves. It does provide, however, an upgraded version of our own biological lens, which inevitably deteriorates with age.

Below you can watch Dr. Garth Webb’s full presentation of this exciting new invention at the Superhuman Summit 2016. 

]]>
Thu, 31 Aug 2017 21:13:36 +0000
<![CDATA[A robot that will replace your smartphone is already in the works]]>http://2045.com/news/35175.html35175One day, we will all have robots instead of smartphones.

The life-like droids will advise you on various matters, help you buy things, and even make your coffee just the way you like it.

That’s the forecast from some of the top minds in robotics and artificial intelligence who gathered in Pebble Beach, California, last week to debate the future at the G-Summit conference organized by GWC.  

The group of scientists and researchers celebrated all the latest advances in the field of robotics, even as they acknowledged the limitations of today’s specimens.

A lot of people want C-3PO — the intelligent and affable droid in the Star Wars films — but if you’re expecting C-3PO today you’ll be disappointed, said Steve Carlin, the chief strategy officer of SoftBank Robotics America, which makes the 4-foot tall, human-shaped "Pepper" robot.

Pepper can do some nifty things like recognize different people's faces and greet customers at Pizza Hut, but it can’t wander around the neighborhood on its own. Similarly, the fearsome mechanical creatures developed by Boston Dynamics can climb stairs and tramp through the snow, but don't expect to have late-night conversations about the meaning of life with these droids. And the boxy machines developed by Amazon Robotics can glide across warehouse floors, impressively moving merchandise around, but that’s all they can do — they are one-trick ponies.

So what makes the leading lights of AI and tech research so certain that we'll eventually get C-3PO?

The answer is in your hands, or rather, in the smartphone that's in your hands.

A smartphone is a combination of different technologies, all of which evolved separately and according to their own timelines. Eventually all the technologies matured to a sophisticated enough stage and it was possible to merge them together. That's when the smartphone was born.

A robot is no different. It’s essentially an embodiment of various very complex technologies including speech recognition, visual computing and mechanical engineering, among other things. When each one of these components attains sufficient maturity, they can be combined to create a “universal” robot akin to C-3PO, said Dmitry Grishin, who cofounded Russian internet company Mail.Ru and is now the CEO of Grishin Robotics.

Grishin didn't say where we currently are, based on this smartphone analogy, in the evolution of a "universal robot."

But if you take stock of the various components inside your smartphone, you can get a rough idea of how long such a process may take:

  • The first radio transmitter was created in 1887.
  • The first commercial photographic camera was produced in 1839. 
  • The first cathode ray tube, the precursor to today's digital display, was created in 1897. 
  • The first integrated circuit dates back to 1949. 

Of course, there are lots of other components required to make a smartphone. But as you can see, it took more than 100 years from the advent of some of the first key technologies needed for a smartphone until we arrived at the 2007 introduction of the iPhone.

Does that mean we're still 100 years away from the robots of science fiction? Not necessarily. Tech development has accelerated at an exponential pace and gathers momentum with each new innovation.

The good news is, your all-purpose, super-intelligent C-3PO robot is coming. But don't throw away your smartphone just yet.

]]>
Sun, 27 Aug 2017 21:10:32 +0000
<![CDATA[Silicon Valley is selling an ancient dream of immortality]]>http://2045.com/news/35177.html35177In 1999, the futurist Ray Kurzweil published a book entitled The Age of Spiritual Machines. He looked forward to a future in which the “human species, along with the computational technology it created, will be able to solve age-old problems . . . and will be in a position to change the nature of mortality.”

Mr Kurzweil is now an executive at Google, one of whose co-founders, Larry Page, launched a start-up, Calico, in 2013 with the aim of harnessing advanced technologies that will enable people to “lead longer and healthier lives”.

Death is not just something the Chinese have invented to make America less competitive. Empirical evidence and rational argument converge on the fact that all of us will eventually disappear for good. Others go even further. Aubrey de Grey, co-founder of Strategies for Engineered Negligible Senescence, a research centre, believes that ageing is just an engineering problem.

Technological progress, he maintains, will eventually enable human beings to achieve what he calls “life extension escape velocity”. As for Mr Kurzweil, earlier this year he announced that he had “set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion-fold by merging with the intelligence we have created”.

In this “transhumanist” vision, once we turn ourselves into “spiritual machines”, we will be able to live forever.

Although mortality denial is currently fashionable in Silicon Valley, it is not new. On the contrary, it is one of the most successful products ever designed and has been on the market for millennia.

Because human beings are the only animals to have evolved an explicit, consciously experienced insight into their own finitude, there is a robust and enduring demand for this particular psychological sleight of hand.

Read article...

]]>
Tue, 22 Aug 2017 21:18:53 +0000
<![CDATA[University of Adelaide test dragonfly neuron for artificial vision system in driverless cars]]>http://2045.com/news/35170.html35170A dragonfly's ability to predict the movement of its prey is being harnessed to improve the way driverless cars manoeuvre in traffic.

Researchers from the University of Adelaide and Lund University in Sweden have found a neuron in dragonfly brains that anticipates movement.

The properties of the target-detecting neurons are being replicated in a small robot in Adelaide to test its potential for artificial vision systems used in driverless cars.

South Australia has a history of involvement with autonomous car research and in 2015 held the first driverless car trials in the southern hemisphere.

The University of Adelaide's autonomous robot testing its sensing techniques derived from dragonflies. (Image: University of Adelaide)

It hosts a number of autonomous car companies including Cohda Wireless, which enables vehicles to communicate with the infrastructure around them using its V2X ('vehicle to everything') technology, and RDM Group, a UK driverless car maker which opened its Asia-Pacific headquarters in Adelaide earlier this year.

The new discovery could add momentum to the emerging local industry, said research supervisor and lecturer at the University of Adelaide's Medical School, Steven Wiederman.

"It is one thing for artificial systems to be able to see moving targets, but tracing movement so it can move out of the way of those things is a really important aspect to self-steering vehicles," he said.

Dr Wiederman said the local driverless car engineers were interested in a "hybrid" of their existing computer vision models with algorithms drawn from nature, which could respond better in "unstructured, unpredictable" environments.

"What we found was the neuron in dragonflies not only predicted where a target would reappear, it also traced movement from one eye to the other – even across the brain hemispheres.

"This is also evident in cluttered environments where an object might be difficult to distinguish from the background," he said.

The research team, led by University of Adelaide PhD student Joseph Fabian, found that target-detecting neurons increased dragonfly responses in a small "focus" area just in front of the location of a moving object being tracked.

If the object then disappeared from the field of vision, the focus spread forward over time, allowing the brain to predict where the target was most likely to reappear.

The neuronal prediction was based on the previous path along which the prey had flown.
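
In computational terms, that behaviour reads like a predictive gain map: a boost centred just ahead of the target that keeps drifting along the last observed velocity, and broadens, once the target vanishes. The Python below is a toy rendering of that idea under our own assumptions, not the published CSTMD1 model.

```python
# Toy sketch of a "predictive focus": a Gaussian gain bump placed just ahead
# of a tracked target; during occlusion it keeps moving along the last
# observed velocity and spreads out, boosting responses where the target is
# most likely to reappear.

import numpy as np

def predictive_gain(grid_x, grid_y, last_pos, last_vel, t_since_seen,
                    lead=0.05, base_sigma=0.02, spread_rate=0.03):
    """Gain over visual positions, centred ahead of the target's last path."""
    center = last_pos + last_vel * (lead + t_since_seen)   # extrapolate forward
    sigma = base_sigma + spread_rate * t_since_seen         # widen with occlusion
    d2 = (grid_x - center[0]) ** 2 + (grid_y - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Usage: a rightward-moving target vanishes; the focus keeps drifting right.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
pos, vel = np.array([0.4, 0.5]), np.array([0.2, 0.0])       # units per second
for t in (0.0, 0.2, 0.4):
    g = predictive_gain(xs, ys, pos, vel, t)
    peak = np.unravel_index(np.argmax(g), g.shape)
    print(f"t={t:.1f}s focus peak near x={xs[peak]:.2f}, y={ys[peak]:.2f}")
```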

Dr Wiederman said this phenomenon was not only evident when dragonflies hunted small prey but when they chased after a mate as well.

This is similar to when a human judges the trajectory of a ball as it is thrown to them, even when it is moving against the backdrop of a cheering crowd.

The research project is the first time a target-tracking model inspired by insect neurophysiology has been installed on an autonomous robot and tested under real-world conditions.

The study of the neuron, known as CSTMD1, was published on Tuesday in the journal eLife.

Researcher Zahra Bagheri from the University of Adelaide said there was growing interest in the use of robots for applications in industry, health and medical services, and entertainment products.

"However, our robots are still far behind the accuracy, efficiency and adaptability of the algorithms which exist in biological systems," she said.

"Nature provides a proof of concept that practical real-world solutions exist, and with millions of years of evolution behind them, these solutions are highly efficient," she said.

A study on the implementation of CSTMD1 into the robot was published earlier this month in the Journal of Neural Engineering.

The research project is an international collaboration funded by the Swedish Research Council, the Australian Research Council and STINT, the Swedish Foundation for International Cooperation in Research and Higher Education.

Previously, Dr Wiederman and his research team demonstrated that bees had vision up to 30 per cent better than previous studies suggested.

This finding has also been beneficial in improving the vision of robots.

]]>
Fri, 28 Jul 2017 19:58:15 +0000
<![CDATA[This slug slime-inspired glue can patch up bloody pig hearts and gooey rat livers]]>http://2045.com/news/35168.html35168A new class of tissue glues can seal a punctured pig heart, new research says. Called Tough Adhesives, these new glues could one day help close up wounds in the hard-to-reach, slimy depths of our bodies. That’s still a ways away, however. So far, they’ve mainly been tested on the blood-covered skin and beating heart of a pig.

The research is part of a bigger push to develop tissue adhesives that can safely and effectively seal up internal cuts and holes left by trauma, surgery, or birth defects. Right now, a patient’s options are pretty much limited to sutures and staples, which can be challenging to use in hard-to-reach, internal places. Medical-grade super glue can only work on dry surfaces like skin. It also dries too stiffly, and is too toxic to use inside the body. The other tissue adhesives on the market just don’t stick well enough, experts say.

Developed by a team of scientists at Harvard University, these new Tough Adhesives can stay sticky even in soggy environments, according to a new study published today in the journal Science. They also don’t appear to be toxic to human cells. That’s key for a glue designed to be used inside the human body. The researchers used their Tough Adhesives to successfully close up a hole in a beating pig heart (the pig was dead, and the heart was artificially made to beat with a machine). It also stopped a slimy rat liver from bleeding, and it stuck to pig skin and a real live pig heart that had been dribbled with blood. “That’s the fun part,” says the study’s lead author, Harvard postdoctoral fellow Jianyu Li.

“It’s an elegant piece of work,” says Mark Grinstaff, a chemistry professor at Boston University and founder of tissue adhesives company Hyperbranch, who was not involved in the research. The fact that the glue worked even on the rat liver was noteworthy, he says: “The liver is really hard to get materials to stick to because it has a really slimy surface.” Jeff Karp, a researcher at Brigham and Women’s Hospital and founder of another tissue adhesives company, Gecko Biomedical, agrees. He cautions, however, that translating a new tissue glue to the clinic isn’t easy.

Many scientists trying to develop better adhesives take inspiration from naturally sticky creatures, like mussels that glue themselves to rocks. The Harvard team, working in bioengineer David Mooney’s lab, looked to the Dusky Arion, a two-inch-long slug that oozes a sticky, defensive mucus. This mucus has two key components: an adhesive that anchors to surfaces, mixed into a kind of flexible gel.

The researchers didn’t actually use any of the molecules in the slug mucus, lead author Li clarifies. Instead, they used a similar and intuitive design strategy: mixing a sticky ingredient with something that can stretch and withstand the stresses exerted by a moving body. The sticky ingredients were ones that already exist, like the gelatin you’d find in your jello, connective tissue extracted from a rat tail, a compound found in shrimp shells, and two synthetic molecules. (The shrimp shell molecule, they found, was among the stickiest.)

For the stretchable, shock-absorbing material, the researchers used something called a hydrogel, which is basically a chemical soup mostly made out of water with a drizzle of a molecule found in algae. Layering these two components created several different versions of these Tough Adhesives. They dried in minutes even on bloody surfaces, and could withstand the forces from thousands of heart contractions.

“It’s very promising technology,” Karp says. The challenge, he adds, will be translating it to the clinic. One hurdle these new glues will have to overcome is manufacturing, he says. Actually taking a product to market requires making it on a large scale, and making sure it doesn’t go bad when it’s sitting on a shelf, waiting to be used. “Manufacturing has been a killer for a lot of these technologies,” he says. Another barrier can be convincing clinicians to actually use the new glues that may be difficult to apply and are different from what they’re used to. But there’s a huge unmet need for this kind of technology, and Karp says this is a promising approach.

]]>
Thu, 27 Jul 2017 19:48:09 +0000
<![CDATA[This swimming robot may have finally spotted melted nuclear fuel inside Fukushima]]>http://2045.com/news/35167.html35167A robot swimming in the depths of one of Fukushima’s nuclear reactors may have spotted lumps of molten nuclear fuel inside. If it did, it would be the first robot to successfully locate the radioactive material, as efforts to clean up after the 2011 nuclear accident at the Fukushima Daiichi power plant in Japan continue.

This latest robotic investigator, nicknamed the Little Sunfish, was sent into the Unit 3 reactor for the first time on July 19th. That’s one of the three nuclear reactors that melted down after a massive earthquake and tsunami struck Japan in 2011. Based on earlier surveys, the plant operator, Tokyo Electric Power Company or TEPCO, suspects that the melted fuel in Unit 3 might have burned through the bottom of its container and dropped into what’s called the primary containment vessel. That’s what shields the outside world from the radioactive materials inside.

Powered by five propellers and sporting a camera on its front and back ends, the football-sized robot was remotely operated via a tether attached to its rear. On its first trip, the Little Sunfish successfully navigated underwater. And on its second visit a few days later, the Little Sunfish snapped photos of what look like hardened lumps of lava that may contain melted nuclear fuel. Experts will need to analyze the photos to be sure, but a TEPCO spokesperson told the Japan Times, "There is a high possibility that the solidified objects are mixtures of melted metal and fuel that fell from the vessel.”

Finding this massive source of radiation is among the first challenges TEPCO will need to overcome in order to decommission the plant. Nuclear power plants are fueled by pellets of uranium, packed together inside hollow metal rods “like peas in a pod,” according to the Union of Concerned Scientists. These fuel rods are part of the nuclear reactor’s core, which keeps producing heat even after the reactor shuts down. That’s why it’s so important to keep nuclear reactors cool: if temperatures climb too high, the reactor core can melt into a kind of radioactive lava.

When the tsunami flooded the Fukushima Daiichi plant after the 2011 earthquake, it took out the backup power generators and cooling systems. Over the next three days, the reactors melted down — and since then, the plant operator has been hunting for the molten messes of metal and radioactive fuel left behind.

So far, at least seven of the robots sent to investigate the reactors at Fukushima Daiichi have broken down during their missions. One robot’s camera was fried by high levels of radiation; another got caught on debris and was abandoned. The Little Sunfish successfully made the trip into Unit 3 not once, but twice. Attempts to remove the melted radioactive fuel probably won’t even start until after 2020, the Associated Press reports — but this small win could be a sign that robots might be able to help the cleanup efforts, after all.

]]>
Wed, 26 Jul 2017 19:44:24 +0000
<![CDATA[MIT’s Cheetah 3 robot is built to save lives]]>http://2045.com/news/35161.html35161The latest version of MIT’s Cheetah robot made its stage debut today at TC Sessions: Robotics in Cambridge, Mass. It’s a familiar project to anyone who follows the industry with any sort of regularity, as one of the most impressive demos to come out of one of the world’s foremost robotics schools in recent years. Earlier versions of the four-legged robot have been able to run at speeds of up to 14 miles an hour, bound over objects autonomously, and even respond to questions with Alexa, by way of an Echo Dot mounted on its back.

The Cheetah 3, however, marks a kind of philosophical change for the robot created by professor Sang-bae Kim and his team at MIT’s Biomimetics lab. The focus has shifted from impressive demos to something more practical — this time out, the team is focused on giving the world a robot that can perform search and rescue.

“Our vision changed to wanting to use this in a real situation, to dispatch it to Fukushima,” Kim told TechCrunch ahead of the event. “We want to use this in a place where we don’t want to use humans. We can use the robot to monitor the environment and other emergency situations. There are a lot of emergency situations where you just want to do a routine check.” 

Post-nuclear-disaster Fukushima, Japan is often brought up in these discussions around where industrial robots can be useful in the real world, and indeed, a number of robots have already been deployed to the site, going where humans can’t — or at least shouldn’t. iRobot/Endeavor’s Packbot has done some work surveying the site, but the Cheetah 3 is able to do things that more traditional wheeled robots can’t, owing in part to its animal-inspired, four-legged build.

“I’ve been fascinated by developing legged machines, which can go where real machines cannot go,” explained Kim. “As mankind, we’ve conquered air, water, ground — all of these categories, but we conquered ground in a different way. We modified the ground for our wheels.”

And it makes sense. It’s the same reason so many roboticists continue to be drawn to human- and animal-inspired robots. We’ve built our environment with us in mind, so a robot drawing on similar evolutionary source material will probably do a better job navigating around. In the case of the Cheetah, that means walking around rubble and up stairs. The team also demoed the new Cheetah’s ability to balance on three legs, using the fourth as a sort of makeshift arm. It’s still in the early stages, but the team is working on a dexterous hand that can perform complex tasks like opening doors — velociraptors eat your hearts out.

 

The new Cheetah design also makes it more capable of carrying payloads — and if this is all starting to sound like what Boston Dynamics has been working on with robots like Big Dog, it’s no coincidence. Both projects were born out of the same DARPA funding. Unlike Boston Dynamics’ work, Kim points out, the Cheetah project has used electric motors (rather than hydraulics) all along, though Boston Dynamics has since introduced that approach as well with the Spot and Spot Mini.

Kim is careful to remind me that this is all still early stages — after all, today’s event is Cheetah 3’s big public debut. For now, however, the team is taking a more pragmatic approach. “We’re doing the easy things first,” Kim explained with a laugh. The robot is currently being tested across the MIT campus, traversing hills and walking up stairs. Next year, the team will push the Cheetah even further; functions left out of this version will be added back into the Cheetah 3 later.

]]>
Mon, 17 Jul 2017 23:00:51 +0000
<![CDATA[Rice team developing flat microscope for the brain]]>http://2045.com/news/35160.html35160Rice University engineers are building a flat microscope, called FlatScope, and developing software that can decode and trigger neurons on the surface of the brain.

Their goal as part of a new government initiative is to provide an alternate path for sight and sound to be delivered directly to the brain.

The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person's loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.

Members of Rice's Electrical and Computer Engineering Department will focus first on vision. They will receive $4 million over four years to develop an optical hardware and software interface. The optical interface will detect signals from modified neurons that generate light when they are active. The project is a collaboration with the Yale University-affiliated John B. Pierce Laboratory led by neuroscientist Vincent Pieribone.

Current probes that monitor and deliver signals to neurons—for instance, to treat Parkinson's disease or epilepsy—are extremely limited, according to the Rice team. "State-of-the-art systems have only 16 electrodes, and that creates a real practical limit on how well we can capture and represent information from the brain," Rice engineer Jacob Robinson said.

Robinson and Rice colleagues Richard Baraniuk, Ashok Veeraraghavan and Caleb Kemere are charged with developing a thin interface that can monitor and stimulate hundreds of thousands and perhaps millions of neurons in the cortex, the outermost layer of the brain.

"The inspiration comes from advances in semiconductor manufacturing," Robinson said. "We're able to create extremely dense processors with billions of elements on a chip for the phone in your pocket. So why not apply these advances to neural interfaces?"

Kemere said some teams participating in the multi-institution project are investigating devices with thousands of electrodes to address individual neurons. "We're taking an all-optical approach where the microscope might be able to visualize a million neurons," he said.

That requires neurons to be visible. Pieribone's Pierce Lab is gathering expertise in bioluminescence — think fireflies and glowing jellyfish — with the goal of programming neurons with proteins that release a photon when triggered. "The idea of manipulating cells to create light when there's an electrical impulse is not extremely far-fetched in the sense that we are already using fluorescence to measure electrical activity," Robinson said.

The scope under development is a cousin to Rice's FlatCam, developed by Baraniuk and Veeraraghavan to eliminate the need for bulky lenses in cameras. The new project would make FlatCam even flatter, small enough to sit between the skull and cortex without putting additional pressure on the brain, and with enough capacity to sense and deliver signals from perhaps millions of neurons to a computer.

Alongside the hardware, Rice is modifying FlatCam algorithms to handle data from the brain interface.
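
Computational reconstruction is what makes a lensless imager like FlatCam work: the mask-coded sensor readings have to be numerically inverted to recover a picture. Below is a minimal sketch of that idea in Python, assuming a simple separable linear measurement model with random stand-in masks and made-up sizes; it is not Rice's algorithm or data.

```python
import numpy as np

# Minimal sketch of lensless-imaging reconstruction, assuming a separable
# linear model Y = Phi_L @ X @ Phi_R.T + noise (stand-in masks and sizes,
# not Rice's code or calibration data). Recovering the scene X from the
# coded sensor readings Y is a regularized least-squares inverse problem.

rng = np.random.default_rng(0)
n_scene, n_sensor = 48, 64

Phi_L = rng.standard_normal((n_sensor, n_scene))   # stand-in coding masks
Phi_R = rng.standard_normal((n_sensor, n_scene))

X_true = np.zeros((n_scene, n_scene))
X_true[20:30, 15:25] = 1.0                          # toy "activity" patch

Y = Phi_L @ X_true @ Phi_R.T + 0.01 * rng.standard_normal((n_sensor, n_sensor))

def tikhonov_pinv(Phi, lam=0.1):
    """Regularized pseudo-inverse (Phi^T Phi + lam*I)^-1 Phi^T."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T)

X_hat = tikhonov_pinv(Phi_L) @ Y @ tikhonov_pinv(Phi_R).T

rel_err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```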

"The microscope we're building captures three-dimensional images, so we'll be able to see not only the surface but also to a certain depth below," Veeraraghavan said. "At the moment we don't know the limit, but we hope we can see 500 microns deep in tissue."

"That should get us to the dense layers of cortex where we think most of the computations are actually happening, where the neurons connect to each other," Kemere said.

A team at Columbia University is tackling another major challenge: The ability to wirelessly power and gather data from the interface.

In its announcement, DARPA described its goals for the implantable package. "Part of the fundamental research challenge will be developing a deep understanding of how the brain processes hearing, speech and vision simultaneously with individual neuron-level precision and at a scale sufficient to represent detailed imagery and sound," according to the agency. "The selected teams will apply insights into those biological processes to the development of strategies for interpreting neuronal activity quickly and with minimal power and computational resources."

"It's amazing," Kemere said. "Our team is working on three crazy challenges, and each one of them is pushing the boundaries. It's really exciting. This particular DARPA project is fun because they didn't just pick one science-fiction challenge: They decided to let it be DARPA-hard in multiple dimensions."

Provided by: Rice University  

]]>
Wed, 12 Jul 2017 22:56:30 +0000
<![CDATA[Bionic Man vs Robots: Winning The Jobs Battle]]>http://2045.com/news/35169.html35169Ever since the Luddites fought back against 19th century industrialization, people have worried about robots ‘stealing’ their jobs. Time and again, the threat has proved real but transitory as new jobs eventually arose to replace those eliminated by technology.

This time could be different, especially if we use digital technology to empower people rather than replace them. It starts with job design and new attitudes about skill.

The Luddites were Right… Sort of

A recent presentation at the ECB Forum on Central Banking in Portugal confirms what economists have long said, which is that increasing labor productivity initially adds jobs within a given industry and then destroys them while creating new work in other areas. The example of the textile industry offers proof. 

The simplified story is that skilled weavers working without industrial machinery comprised a small group. Their output was tiny, with most consumers owning at most a few items of clothing. Mechanization radically increased productivity, which meant consumers bought more clothes and employment initially rose – the ‘income effect’ working on elastic demand.

The jobs were dangerous, scary and mostly low skilled. They also eventually started to disappear altogether as more and more tasks were taken over by machines. From the Luddites’ perspective, this is hardly a positive outcome.

]]>
Thu, 6 Jul 2017 19:52:43 +0000
<![CDATA[Neuron-integrated nanotubes to repair nerve fibers]]>http://2045.com/news/35153.html35153Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices consisting of biological tissue and synthetic material. These could re-establish connections between nerve cells at the spinal level that were lost due to lesions or trauma. This is the result of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.

Researchers have investigated the possible effects on neurons of interactions with carbon nanotubes. Scientists have shown that these nanomaterials can regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms such as the growth of neurons as part of a self-regulating process. This result, which shows the extent to which the integration between nerve cells and these synthetic structures is stable and efficient, highlights possible uses of carbon nanotubes as facilitators of neuronal regeneration or to create a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has already begun.

"Interface systems, or, more generally, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation," says Laura Ballerini (SISSA). "The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries." These nanomaterials are used both as scaffolds, as supportive frameworks for nerve cells, and as interfaces transmitting those signals by which nerve cells communicate with each other.

Many aspects, however, still need to be addressed. Among them, the impact on neuronal physiology of the integration of these nanometric structures with the cell membrane. "Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude," says Laura Ballerini. "If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable."

"This," Maurizio Prato adds, "is precisely what we have investigated in this study where we used pure carbon nanotubes."

The results of the research are extremely encouraging: "First of all, we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important."

The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes via this interaction develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. "Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited. We proved that after a few weeks, a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance."

Laura Ballerini says, "We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now, we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions."

More information: Niccolò Paolo Pampaloni et al, Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces, Nanomedicine: Nanotechnology, Biology and Medicine (2017). DOI: 10.1016/j.nano.2017.01.020 

Provided by: International School of Advanced Studies (SISSA) 

]]>
Mon, 3 Jul 2017 22:23:47 +0000
<![CDATA[This Parkour Robot Easily Bounces Its Way Over Obstacles]]>http://2045.com/news/35155.html35155With a spinning tail and small thrusters, it has total control over its orientation in mid-air so that it’s always ready for the next hop.

Researchers at the University of California, Berkeley, have updated their parkour robot, and the results would make any free-runner green with envy.

Late last year, we wrote about Duncan Haldane’s Salto robot. It was impressive: weighing mere ounces and standing just a few inches tall, it crouched low, jumped high, and could quickly prepare for another jump. That meant that it could, say, bounce itself off walls.

The only problem was that the small spinning tail it used to control its aerial orientation could control it only along one axis—known as pitch, as on an airplane. That meant it could only jump forward and backward, and then only for a few hops at a time, because if it went off balance along the other two axes it would fall to the left or right.

Now, though, IEEE Spectrum reports that Salto has been upgraded: say hello to Salto-1P. The addition of two small thrusters, like propellers from a quadcopter drone, allows it to adjust its orientation in the two other directions, known as roll and yaw, as it moves through the air. It can also crouch lower, enabling it to jump a little farther. (It’s worth noting that it’s not autonomous—a computer is working out how it should move and wirelessly beaming it instructions.)

You can see the impressive results of those upgrades in the clips above. Now, Salto-1P can bounce forward and backward many times over, move side to side to cover the entire floor of a room, and even traverse obstacles like foam blocks and a ramp.  

(Read more: IEEE Spectrum, “This Super-Springy Robot Can Do Parkour”)

]]>
Thu, 29 Jun 2017 22:34:23 +0000
<![CDATA[hitchBOT creators to study how AI and robots can help patients]]>http://2045.com/news/35154.html35154McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

With the help of Softbank's humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University and Hermenio Lima, a dermatologist and professor of medicine at McMaster's Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients' behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the Science and Technology Museum in Ottawa.

"Pepper will help us highlight some very important aspects and motives of human behaviour and communication," said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, 'read' emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: "We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients' communications needs."

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD's Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

"This partnership is a testament to the collaborative nature of innovation," said dean of FCAD, Charles Falzon. "I'm thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom."

"This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care," says McMaster's Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper offers a rich source of research potential for the projects at Ryerson and McMaster. IBM Canada and SOSCIP also support this integration by providing the project with access to high-performance research computing resources and staff in Ontario.

"We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations," said Harris Smith.

Provided by: McMaster University  

]]>
Fri, 23 Jun 2017 22:26:18 +0000
<![CDATA[NASA releases footage of robot 'Valkyrie']]>http://2045.com/news/35150.html35150Scientists from the United States space agency NASA teamed up with the Johnson Space Center to test the agency's new robot Valkyrie, an android that has a head, two arms and two legs. The robot is expected to be sent to Mars in the future.

A humanoid robot known as Valkyrie that could one day walk on Mars has been showing off its skills in a new video.

Named after the female war spirits of Norse mythology, Valkyrie walks on two legs and has jointed arms and hands that can grasp objects.

Designed and built by NASA's Johnson Space Center, Valkyrie will walk on Mars before the first human explorers, who are expected to reach the Red Planet in the mid-2030s.

The humanoid design was chosen to make it easier for Valkyrie to work alongside people so that, for instance, no special ramps have to be provided to accommodate wheels.

In the video, Valkyrie is shown walking over a series of stepping stones in a curved, uneven path, without stumbling once.

All the decisions about where Valkyrie will place its foot next and how to counterbalance its weight are made autonomously, thanks to a control algorithm developed by IHMC Robotics, which acts as the robot's brain.

This algorithm gathers data about the environment using a spinning laser radar or "Lidar" system housed in its face - similar to those used in driverless cars.

The instrument measures the distance to objects by firing pulses of light at surfaces and timing how long it takes the reflected "echoes" to bounce back.

It then processes this data to identify flat "planar regions" that are suitable for stepping on, before plotting out footsteps to reach a specified location.
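
As a rough illustration of those two steps (turning an echo time into a range, and flagging near-level surface patches as candidate footholds), here is a toy sketch with invented thresholds and example values; it is not IHMC's planner.

```python
import math

# Toy illustration of lidar ranging and foothold screening. The speed of
# light is real physics; the tilt threshold and example values are made up.

C = 299_792_458.0  # speed of light, m/s

def echo_to_range(round_trip_seconds: float) -> float:
    """The pulse travels out and back, so range is half the round-trip path."""
    return 0.5 * C * round_trip_seconds

def is_planar_candidate(surface_normal, max_tilt_deg: float = 15.0) -> bool:
    """Accept a fitted surface patch as a foothold if it is close to level."""
    nx, ny, nz = surface_normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    tilt = math.degrees(math.acos(abs(nz) / norm))
    return tilt <= max_tilt_deg

print(f"{echo_to_range(20e-9):.2f} m to target")      # ~3 m away
print(is_planar_candidate((0.05, 0.02, 0.99)))        # nearly flat -> True
print(is_planar_candidate((0.70, 0.00, 0.70)))        # ~45 degree slope -> False
```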

Maintaining balance is one of the biggest hurdles to be crossed when designing a walking humanoid robot, according to IHMC Robotics.

Valkyrie overcomes this problem by rapidly computing in real time how to alter its centre of mass position to stay upright.

The robot has no "ears" and cannot speak, but it is equipped with a pair of stereoscopic camera "eyes", cameras on its belly, and an intricate set of force sensors to help it react to touch and pressure.

The robot has a total of 34 "degrees of freedom" - essentially, modes in which it can move - but it is expected to acquire more dexterous capabilities over the next few years.

]]>
Mon, 19 Jun 2017 23:28:00 +0000
<![CDATA[The bionic skin to help robots feel]]>http://2045.com/news/35152.html35152Meet the team behind the 3D-printed stretchable sensors equipping machines with a sense of touch.

Robots can’t feel. Or can they? Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing a stretchable electronic fabric and it’s allowing robots to experience tactile sensation. We reached out to University of Minnesota mechanical engineering associate professor and lead researcher on the study, Michael McAlpine, to find out how the super sensors work.

McAlpine is no stranger to Red Bull or 3D printing. He first achieved international acclaim for integrating electronics and 3D-printed nanomaterials to create a ‘bionic ear’ designed to hear radio frequencies beyond human capability, and featured in our 20 Mightiest Minds on Earth edition of The Red Bulletin way back in 2012. Now he’s tackling a new sense, touch, and his bionic skin may just save lives.

“Putting this type of ‘bionic skin’ on surgical robots would give surgeons the ability to actually feel during minimally invasive surgeries, which would make surgery easier and more precise instead of just using cameras like they do now. These sensors could also make it easier for other robots to walk and interact with their environment,” McAlpine says.

In a further melding of man and machine, future sensors could be printed directly onto human skin for purposes of health monitoring or to protect soldiers in the field from dangerous chemicals or explosives – the ultimate in wearable tech.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine says. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

The applications are pretty impressive, but how exactly does it work? Well, as you might imagine, it’s not your standard 3D printer.

New technology could print directly on human skin. © Shuang-Zhuang Guo and Michael McAlpine, University of Minnesota

Conventional 3D printing using liquid plastic is too rigid and hot to print on skin, so McAlpine and his team print their unique sensing material using a one-of-a-kind printer they built in the lab. The multifunctional printer has four nozzles to print the various specialised 'inks' that make up the layers of the device – a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process.

All the 'inks' used in this process can set at room temperature and the 3D-printed sensors can stretch up to three times their original size.

“With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it’s ready for use,” McAlpine explains. “The nice thing about this 3D-printing tool is that the manufacturing is built right into the tool, so this is reality now. We’re starting to integrate these devices directly onto the human body now, and it’s going to completely revolutionise the way people think about 3D printing.”

]]>
Sun, 18 Jun 2017 23:40:49 +0000
<![CDATA[If we want bionic limbs that actually work, we might need smarter amputations]]>http://2045.com/news/35151.html35151Prosthetic limbs are advancing in leaps and bounds. They’re becoming computerized, brain-controlled, and sensational. But as futuristic as these bionic limbs are, users often prefer simpler devices because the fancy ones are hard to control and they don’t provide enough feedback.

If you flex your wrist, even if your eyes are closed, you can feel where your wrist is and how fast you’re flexing it. And if you’re holding a barbell, you can feel how heavy it is. Someone with an artificial wrist can’t feel any of that—instead, she has to constantly keep an eye on her prosthetic to see what it’s doing.

“Those sensations are what we intend to provide back to people with limb amputation,” says Hugh Herr, who creates prosthetic limbs at MIT and wears two bionic legs himself.

Herr and his colleagues argue that part of the reason advanced prosthetics aren’t taking off is because amputation essentially hasn’t changed since the Civil War. In a new paper in Science Robotics, they’ve tested a new amputation procedure that may provide better control of advanced prostheses, as well as sensory feedback.

Typical amputations slice right through a patient’s nerves and muscles, leaving some extra muscle to tuck around the end of the limb for cushioning. Without any organs to stimulate, the severed nerves swell painfully. In addition, the arrangement weakens the electrical signals from the muscle, making it difficult to control some bionic limbs that take their orders from the body’s electrical circuitry.

Normally, muscles come in pairs that do opposite things. When you flex your biceps, for example, your triceps stretch. That stretching tricep automatically sends a signal back to your brain, telling you what’s happening in your arm. Amputation typically breaks up these muscle pairings, but Herr thinks that recreating them could make controlling a bionic limb feel more natural, and could give users a sense of their bionic limb’s position and movements without having to look at it. (That sense is called proprioception.)

Muscles normally come in pairs. When one muscle in the pair contracts, the other stretches and sends a signal back to the brain. Researchers think they might be able to use these natural pairings to help amputees "feel" what their artificial limb is doing.

To test out this idea, Herr and his team created artificial muscle pairings in seven rats. Taking two muscles whose nerves had been removed, they linked them together and grafted them into the rats’ legs. Then they took two nerves that normally flex and extend leg muscles, and attached one to each muscle graft. Later, when they stimulated one of the muscles to make it contract, measurements showed that the second muscle automatically sent out a signal to the brain as it stretched. The experiment showed that these artificial muscle pairings work similarly to the biological pairings. Plus, the muscles and nerves provided a strong enough electrical signal that it could potentially be used to control a prosthetic device.

To Herr, these results mean that the artificial muscle pairings might allow information to flow to and from a prosthetic limb. Electrical signals from the contracting muscle could tell the bionic limb what to do, while the stretching muscle tells the brain how the limb is moving, creating a sense of position. Electrical stimulation from the bionic limb to the muscle could provide additional feedback about where the limb is and what it’s feeling. That way, the arm can tell you if someone is shaking your artificial hand or how heavy a barbell in your grip is.
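
One purely illustrative way to picture that two-way flow is the control loop sketched below: EMG from the contracting graft sets the prosthetic joint's motion, while the antagonist graft is electrically stimulated in proportion to the joint's measured angle and load. The signal names, gains, and units are invented, and this is not the MIT group's software.

```python
# Hedged, purely illustrative control-loop sketch of the agonist-antagonist
# interface idea described above: EMG from the contracting graft commands the
# joint, and the antagonist graft is stimulated in proportion to the prosthetic
# joint's measured state so the nervous system "feels" it. All values are
# invented for illustration.

def joint_velocity_command(emg_agonist: float, emg_antagonist: float,
                           gain: float = 2.0) -> float:
    """Differential EMG sets joint velocity: flexor minus extensor activity."""
    return gain * (emg_agonist - emg_antagonist)

def feedback_stimulation(joint_angle: float, joint_torque: float,
                         k_angle: float = 0.8, k_torque: float = 0.5) -> float:
    """Stimulation amplitude for the stretched muscle, encoding position
    (proprioception) and load (e.g. how heavy a held barbell feels)."""
    return k_angle * joint_angle + k_torque * joint_torque

# One cycle: user flexes hard; prosthetic wrist is at 0.4 rad holding a load.
vel = joint_velocity_command(emg_agonist=0.7, emg_antagonist=0.1)
stim = feedback_stimulation(joint_angle=0.4, joint_torque=1.2)
print(f"wrist velocity command: {vel:.2f} rad/s, feedback stim: {stim:.2f} mA")
```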

Each muscle pairing can only control one type of motion—for example, moving your forearm up and down for a handshake. Other, independent muscle pairings would be needed to flex each finger, or adjust your wrist.

Study author Hugh Herr hopes to be one of the first humans to try out the new procedure. So far it's only been tested in rats.

Some people with amputations may still have some of these natural muscle pairings in their residual limb. For others, the pairings could be reconstructed by taking muscles from other parts of the body and grafting them to the prosthetic attachment site, like Herr’s team did in this study. And for amputations that are planned in advance, the limb that’s being amputated can be an excellent source of muscles and nerves to help recreate the muscle pairings.

“In the past, the limb was amputated and cremated, and all those great tissues were thrown away,” says Herr. “Even in my case—both my legs are amputated below the knee, and my amputations were done 30-some years ago in a really silly way, in a conventional way—in principle we can do a revision on my limbs and get muscles from another part of my body and create these pairs.”

And in fact, that’s exactly what he plans to do. “We want to rapidly translate this to humans, and I personally want this done on my limbs,” says Herr. Currently he’s having his limbs imaged, developing a surgical plan, and waiting for approval from an ethical review board, but he thinks he could undergo the surgery “very soon.”

The procedure is considered low risk since it just involves rearranging tissues. If it doesn’t work, the results should be similar to a conventional amputation.

Another advantage, says Herr, is that the technique provides feedback to the user’s nerves via the muscles. “Muscles don’t mind getting touched by synthetic things, but nerves really complain. It doesn’t like it at all, and ends up rejecting it. Muscles are a lot less touchy.” The FDA has already approved other electrical devices that interface with muscles, so the team will face less of a hurdle there.

If it works, the amputation technique may provide more precise control and sensory feedback, which in turn can lead to better reflexes and a better user experience.

They still need to test it in humans, but the team is hopeful that their technique will help make bionic limbs feel and behave more like natural limbs.

Other researchers, who are putting wires into people’s nerves, have to figure out what electrical patterns can recreate a sense of force, touch, position, and speed. By contrast, says Herr, “we’re using the body’s natural sensors to create these sensations. We’re confident because of that, it’ll feel like position, it’ll feel like speed, it’ll feel like force.”

]]>
Sat, 17 Jun 2017 23:36:49 +0000
<![CDATA[This wriggling worm-bot could be used for colonoscopies one day]]>http://2045.com/news/35149.html35149Nobody needs to reinvent the wheel, but reinventing the colonoscope is definitely worth somebody’s time. Mark Rentschler, an associate professor at the University of Colorado Boulder, is one of those people. He and his team have been working on the wormy robot, above, as a replacement for the usual flexible-camera-tube colonoscope.

“Don’t get me wrong, the traditional methods work very well, but they’re not pleasant for the patient,” Rentschler tells The Verge. “You’re basically pushing a rope through a deformable tube and the rope only bends when you get enough force against a wall. That’s where a lot of the discomfort comes from.”

Removing that discomfort is about more than just patient happiness. If colon cancer is caught early, “you’re almost guaranteed survival,” says Rentschler. The problem is that people are so unnerved by the idea of a colonoscopy that they just don’t get checked.

To overcome this problem, scientists are working on a number of different colonoscope designs, all of which have a degree of autonomy. Some have caterpillar treads, some have wheels, but Rentschler and his team thought the best approach would be to mimic natural movements inside the body. That’s why they settled on peristalsis as their chosen form of locomotion. This is the constriction and relaxation of muscles, and is used to move food along the bowels. So why not use it to move robots, too?

Peristalsis in Rentschler’s bot is simulated using springs made from a shape-memory alloy — a material that “remembers” its shape, and returns to it when heated. The metal is heated by a small electric current and expands outward. Then, a combination of cooling air and a 3D-printed silicone sheath covering the exterior of the bots acts as a natural “restoring force” to push them back in. “With this we can drive along and steer about,” says Rentschler. “Then we just put a camera on the end.”
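
The drive pattern itself is simple to picture in code: energize one shape-memory segment at a time so a constriction wave travels along the body. The sketch below assumes hypothetical heat_segment and release_segment hooks and made-up timings; it is not the Colorado team's firmware.

```python
import time

# Toy sketch of a peristaltic drive loop, assuming the robot exposes one
# heat-able shape-memory-alloy segment per body section. The hooks
# (heat_segment, release_segment) are hypothetical placeholders.

NUM_SEGMENTS = 3
HEAT_SECONDS = 0.5   # expand a segment by passing current through its SMA spring
COOL_SECONDS = 0.5   # cooling air plus the silicone sheath pull it back in

def heat_segment(i: int) -> None:
    print(f"segment {i}: current on (SMA spring expands outward)")

def release_segment(i: int) -> None:
    print(f"segment {i}: current off (sheath restores the segment)")

def peristaltic_wave(cycles: int = 2) -> None:
    """Expand segments one after another so a constriction wave travels
    front-to-back, nudging the body forward like gut peristalsis."""
    for _ in range(cycles):
        for i in range(NUM_SEGMENTS):
            heat_segment(i)
            time.sleep(HEAT_SECONDS)
            release_segment(i)
            time.sleep(COOL_SECONDS)

if __name__ == "__main__":
    peristaltic_wave()
```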

The new bot was shown off earlier this month at the 2017 IEEE International Conference on Robotics and Automation or ICRA. It’s still in the prototype stage, though, and a number of improvements will need to be made if it ever makes it into hospitals (and bodies).

“We definitely want to get a little bit smaller in diameter,” says Rentschler. “And then the other big challenge is speed.” Right now, the bot can squirm along at a rate of around six inches in 15 seconds. An average colonoscopy takes about 30 minutes, and Rentschler’s aim is to get this down to the 20-minute mark. “We're close, but we do want to increase our speed,” he says.

And with a better colonoscope, lives can be saved. Not bad for a wriggly, squiggly robot.

]]>
Thu, 15 Jun 2017 23:20:15 +0000
<![CDATA[Meet the Most Nimble-Fingered Robot Yet]]>http://2045.com/news/35144.html35144A dexterous multi-fingered robot practiced using virtual objects in a simulated world, showing how machine learning and the cloud could revolutionize manual work.

Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects. What’s stunning, though, is that the robot got so good at grasping by working with virtual objects.

The robot learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers fed images to a large deep-learning neural network connected to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

The bot is significantly better than anything developed previously. In tests, when it was more than 50 percent confident it could grasp an object, it succeeded in lifting the item and shaking it without dropping the object 98 percent of the time. When the robot was unsure, it would poke the object in order to figure out a better grasp. After doing that it was successful at lifting it 99 percent of the time. This is a significant step up from previous methods, the researchers say.
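
That decision rule can be summarized in a few lines: score the candidate grasps, lift if the best score clears the confidence threshold, otherwise poke the object and re-sense. The grasp descriptions and scores below are stand-ins, not output from the Berkeley group's network.

```python
# Illustrative decision logic mirroring the behavior described above: score a
# set of candidate grasps, lift if the best one clears a confidence threshold,
# otherwise nudge the object and re-image. Scores here are made up.

from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.5

def choose_action(candidate_grasps: List[Tuple[str, float]]) -> str:
    """candidate_grasps: (grasp description, predicted success probability)."""
    best_grasp, best_score = max(candidate_grasps, key=lambda g: g[1])
    if best_score > CONFIDENCE_THRESHOLD:
        return f"lift using {best_grasp} (confidence {best_score:.2f})"
    return "poke the object and re-image to find a better grasp"

# Scores as a trained grasp-quality network might emit them (invented here).
print(choose_action([("pinch at handle", 0.82), ("top-down grasp", 0.41)]))
print(choose_action([("pinch at rim", 0.35), ("side grasp", 0.47)]))
```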

The work shows how new approaches to robot learning, combined with the ability for robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes (see “10 Breakthrough Technologies 2017: Robots That Teach Each Other”). It is described in a paper to be published at a major robotics conference held this July.

Many researchers are working on ways for robots to learn to grasp and manipulate things by practicing over and over, but the process is very time-consuming. The new robot learns without needing to practice, and it is significantly better than any previous system. “We’re producing better results but without that kind of experimentation,” says Ken Goldberg, a professor at UC Berkeley who led the work. “We’re very excited about this.”

Instead of practicing in the real world, the robot learned by feeding on a data set of more than a thousand objects that includes their 3-D shape, visual appearance, and the physics of grasping them. This data set was used to train the robot’s deep-learning system. “We can generate sufficient training data for deep neural networks in a day or so instead of running months of physical trials on a real robot,” says Jeff Mahler, a postdoctoral researcher who worked on the project.

Goldberg and colleagues plan to release the data set they created. Public data sets have been important for advancing the state of the art in computer vision, and now new 3-D data sets promise to help robots advance.

Stefanie Tellex, an assistant professor at Brown University who specializes in robot learning, describes the research as “a big deal,” noting that it could accelerate laborious machine-learning approaches.

“It's hard to collect large data sets of robotic data,” Tellex says. “This paper is exciting because it shows that a simulated data set can be used to train a model for grasping.  And this model translates to real successes on a physical robot.”

Advances in control algorithms and machine-learning approaches, together with new hardware, are steadily building a foundation on which a new generation of robots will operate. These systems will be able to perform a much wider range of everyday tasks. More nimble-fingered machines are, in fact, already taking on manual labor that has long remained out of reach (see “A Robot with Its Head in the Cloud Tackles Warehouse Picking”).

Russ Tedrake, an MIT professor who works on robots, says a number of research groups are making progress on much more capable dexterous robots. He adds that the UC Berkeley work is impressive because it combines newer machine-learning methods with more traditional approaches that involve reasoning over the shape of an object.

The emergence of more dexterous robots could have significant economic implications, too. The robots found in factories today are remarkably precise and determined, but incredibly clumsy when faced with an unfamiliar object. A number of companies, including Amazon, are using robots in warehouses, but so far only for moving products around, and not for picking objects for orders.

The UC Berkeley researchers collaborated with Juan Aparicio, a research group head at Siemens. The German company is interested in commercializing cloud robotics, among other connected manufacturing technologies.

Aparicio says the research is exciting because the reliability of the arm offers a clear path toward commercialization.

Developments in machine dexterity may also be significant for the advancement of artificial intelligence. Manual dexterity played a critical role in the evolution of human intelligence, forming a virtuous feedback loop with sharper vision and increasing brain power. The ability to manipulate real objects more effectively seems certain to play a role in the evolution of artificial intelligence, too.

]]>
Sat, 3 Jun 2017 01:33:51 +0000
<![CDATA[I Spy With My DragonflEye: Scientists 'Hack' Insect to Create Cyborg Drone]]>http://2045.com/news/35145.html35145Many might think of a cyborg as something out of a science-fiction movie script, but scientists have found a way to alter a living dragonfly so they can control its movements.

As countries like the United States continue to rely on surveillance drones, the challenge of shrinking the flying robots down to an inconspicuous size has become a point of interest for military researchers.

Scientists at the Charles Stark Draper Laboratory in the US have developed a way of using living insects as drones.

The research has been named DragonflEye, and is essentially a cyborg dragonfly, meaning it is half dragonfly, half machine.

It was created by genetically modifying regular dragonflies with "steering neurons" in the spinal cord of the insect. Through doing this, tiny, fiber-optic-like structures in the eyes of the dragonfly send bursts of light to the brain, which then allows scientists to control where the insect flies via remote control.

On the dragonfly's back is a tiny device that looks like a backpack, containing sensors and a solar panel to power the data collection technology.

The hope is that the dragonfly can then be steered by the researchers and collect data through its sensors in environments that are either not safe for humans or too small for humans to fit through, such as cracks in walls.

Some champion this as a huge breakthrough for technology, while others might feel slightly uncomfortable with the thought of genetic modification being used to control insects, or perhaps one day even higher species.

However, the cyborg insect could also be very advantageous to the way that we understand the world, and perhaps even one day to humans.

Some have suggested that such technology could be used to help humans who are paralyzed to restore movement. 

]]>
Fri, 2 Jun 2017 01:38:23 +0000
<![CDATA[MIT teaches machines to learn from each other]]>http://2045.com/news/35141.html35141There are two typical ways to train a robot today: you can have it watch repeated demonstrations of what you want it to do or you can program its movements directly using motion-planning techniques. But a team of researchers from MIT's CSAIL lab have developed a hybridized third option that will enable robots to transfer skills and knowledge between themselves. It's no Skynet, but it's a start.

The system, dubbed C-LEARN, is designed to enable anybody, regardless of their programming know-how, to program robots for a wide range of tasks. But rather than having the robot ape your movements or hand-coding its desired movements, C-LEARN only requires that the user input a bit of information on how the objects the robot will interact with are typically handled, then run through a single demonstration. The robot can then share this kinematic data with others of its kind.

First, the user inputs the environmental constraints -- essentially how to reach out, grasp and hold the items the robot is interacting with. That way the robot isn't crushing everything it touches or holding objects in a way that will cause them to break or fall. Then, using a CAD program, the user can create a single digital demonstration for the robot. It works a lot like traditional hand-drawn animation: the robot's motions hit specific movements and positions as "keyframes," and the system fills in the rest.
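
Here is a minimal sketch of that keyframe idea, assuming plain linear interpolation between demonstrated joint configurations and a placeholder constraint check; C-LEARN's actual planner is more sophisticated.

```python
import numpy as np

# Sketch of the "keyframe" idea described above: the demonstration pins down a
# few joint configurations, and the system fills in the motion between them.
# Plain linear interpolation plus a stand-in constraint check, for illustration.

def interpolate_keyframes(keyframes, steps_between=5):
    """Return a dense joint trajectory passing through each keyframe."""
    path = []
    for start, end in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            path.append((1.0 - t) * np.asarray(start) + t * np.asarray(end))
    path.append(np.asarray(keyframes[-1]))
    return path

def satisfies_constraints(q) -> bool:
    """Placeholder for grasp/approach constraints supplied by the user."""
    return bool(np.all(np.abs(q) < np.pi))   # e.g. keep joints within limits

keyframes = [[0.0, 0.0, 0.0], [0.3, -0.2, 0.5], [0.6, -0.4, 0.9]]  # 3 joints
trajectory = interpolate_keyframes(keyframes)
assert all(satisfies_constraints(q) for q in trajectory)
print(f"{len(trajectory)} waypoints generated from {len(keyframes)} keyframes")
```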

Of course, the robot doesn't have the final say in this; all motion plans have to be verified by the human operator first. Overall, the robots were able to choose the optimal motion plan 87.5 percent of the time without human intervention, though that number jumped to 100 percent when a human operator was able to tweak the plan as needed.

The first robot to benefit from this new system is the Optimus, a two-armed bomb disposal-bot. The CSAIL team taught it to open doors, carry items and even pull objects out of jars. The Optimus was then able to transfer these same skills to another robot in the CSAIL lab, the 6-foot, 400-pound Atlas.

]]>
Fri, 12 May 2017 12:47:27 +0000
<![CDATA[Freaky Ostrich-like running robot built for ‘planetary exploration’ (VIDEOS)]]>http://2045.com/news/35135.html35135It may look like an ostrich cantering over the ground, but the Planar Elliptical Runner could become the model for a human-sized running robot – and even aid “planetary exploration.”

Developed by the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida, the machine’s fluid locomotion has drawn comparisons with the flightless bird.

Speaking to Digital Trends, research associate Johnny Godowski said: “It’s emulating what you see in nature. Birds are able to run over holes and obstacles half their leg height, and they don’t even break stride. Our robot mechanism is designed to do the same thing.”

Unlike other two-legged robots, it does not use computer sensors to balance itself. Instead a single motor drives the machine’s two legs while a side-to-side motion keeps it upright. The robot is also guided by a standard radio controller, meaning it does not waste battery power.

The robot can reach speeds of up to 10mph (16kph) – but researchers believe a human-sized machine could one day hit speeds of up to three times that of its smaller counterpart.

Jerry Pratt, a senior research scientist at IHMC, told Technology Review: “We believe that the lessons learned from this robot can be applied to more practical running robots to make them more efficient and natural looking.

“Robots with legs will be particularly useful in places where you want a human presence, but it’s too dangerous, expensive, or remote to send a real human. Examples include nuclear power plant decommissioning and planetary exploration.”

In 2013, Pratt led a team to second place in the DARPA Robotics Challenge, a US Defense Department contest testing robots’ abilities to perform a series of tasks in extreme environments.  

Other robotics firms are hoping to make breakthroughs with their own two and four-legged machines.

In February, Agility Robotics unveiled Cassie, another ostrich-inspired bipedal creation, while Honda continues to market its humanoid robot ASIMO.

Meanwhile, Pratt’s team is working on a number of biped projects. IHMC showcased these advances at its annual Robotics Open House in Florida last month.

]]>
Sat, 6 May 2017 00:36:17 +0000
<![CDATA[Bionic hand that can see for itself makes things easy to grasp]]>http://2045.com/news/35136.html35136An artificial hand is using artificial intelligence to see with an artificial eye. The new prosthetic can choose how best to grab objects placed in front of it automatically, making it easier to use.

When it sees an object, the artificial hand detects the intention to grasp by interpreting electrical signals from muscles in the wearer’s arm. It then takes a picture of the object using a cheap webcam and picks one of four possible grasping positions.

The different grips include one similar to picking up a cup, one similar to picking up a TV remote from a table, one that uses two fingers and a thumb, and another that uses just the thumb and index finger. “The hand learns the best way to grasp objects – that’s the beauty of it,” says Ghazal Ghazaei at Newcastle University, UK.

To train the hand, Ghazaei and her colleagues showed it images of more than 500 objects. Each object came with 72 different images, showing different angles and different backgrounds, as well as the best grip for picking it up. Through trial and error, the system learned to choose the best grips for itself.
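
To make the input/output contract concrete, here is a toy stand-in for that grasp-type classifier: an image is reduced to a feature vector and assigned one of four grip classes. The real system uses a trained convolutional network on webcam images; the nearest-prototype classifier and random features below are purely illustrative.

```python
import numpy as np

# Minimal stand-in for the grasp-type classifier described above: an image is
# reduced to a feature vector and assigned one of four grip classes. A trained
# network does this in the actual study; this toy nearest-prototype classifier
# only illustrates the input/output contract.

GRASPS = ["palmar (cup-like)", "lateral (TV-remote)", "tripod", "pinch"]

rng = np.random.default_rng(1)
prototypes = rng.standard_normal((4, 128))   # one "learned" prototype per grasp

def classify_grasp(image_features: np.ndarray) -> str:
    """Pick the grasp whose prototype is closest to the image's features."""
    distances = np.linalg.norm(prototypes - image_features, axis=1)
    return GRASPS[int(np.argmin(distances))]

webcam_features = rng.standard_normal(128)   # stand-in for a webcam snapshot
print("chosen grip:", classify_grasp(webcam_features))
```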

Not quite there

Existing controllable prosthetics work by converting electrical signals in a person’s arm or leg into movement. But it can take a long time to learn to control an artificial limb and the movements can still be clumsy. The new system is just a prototype, but by giving a hand the ability to see what it is doing and position itself accordingly, the team believe they can make a better prosthetic.

The design has been tested by two people who have had a hand amputated. They were able to grab a range of objects with just under 90 per cent accuracy. That’s not bad for a prototype but dropping one out of 10 things users try to pick up is not yet good enough.

“We’re aiming for 100 per cent accuracy,” says Ghazaei. The researchers hope to achieve this by trying out different algorithms. They also plan to make a lighter version with the camera embedded in the palm of the hand.

The key with prostheses like these is getting the balance right between user and computer control, says Dario Farina at Imperial College London. “People don’t want to feel like a robot, they want to feel like they are fully in control,” he says.

It’s important that the technology helps assist grasping rather than fully taking over. “It should be similar to brake assistance on a car, the driver decides when to brake but the car helps them brake better,” says Farina.

]]>
Wed, 3 May 2017 00:42:47 +0000
<![CDATA[What humans will look like in 1,000 years]]>http://2045.com/news/35137.html35137Humans are still evolving. So where will evolution take us in 1,000 years?
Chances are we’ll be taller. Humans have already seen a boom in height over the last 130 years.

In 1880 the average American male was 5’7’’. Today, he’s 5’10’’.

We may also merge with machines that can enhance our hearing, eyesight, health, and much more. Right now, there are hearing aids that let you record sounds, generate white noise, and even come with a built-in phone.

Another example is a team out of the University of Oregon, which is developing bionic eyes that help the blind see. But it’s not impossible to imagine that this technology could become a tool for seeing what we currently consider invisible, such as infrared and X-ray light.

There will eventually be a day when prosthetics are no longer just for the disabled.

However, it’s not just our outside appearance that will change – our genes will also evolve on microscopic levels to aid our survival. For example, an Oxford-led study discovered a group of HIV-infected children in South Africa living healthy lives. It turns out, they have a built-in defense against HIV that prevents the virus from advancing to AIDS.

And with gene-editing tools like CRISPR, we may eventually control our genes and DNA to the point where we make ourselves immune to disease and even reverse the effects of aging.

Another way to jump-start human evolution on a different path is to move some of us to Mars. Mars receives 66% less sunlight than Earth, which could mean humans on Mars will evolve larger pupils that can absorb more light in order to see. And since Mars’ gravitational pull is only 38% of Earth’s, people born on Mars might actually be taller than anyone on Earth. In space, the fluid that separates our vertebrae expands, which led American aerospace engineer Robert Zubrin to suggest that Mars’ low gravity could allow the human spine to elongate enough to add a few extra inches to our height.

However, not even a move to Mars could spark the biggest change in human evolution that we may have coming in the next 1,000 years: immortality. The path to immortality will likely require humans to download their consciousness into a machine. Right now, scientists in Italy and China are performing head transplants on animals to determine if you can transfer consciousness from one body to another. They claim their next big step is to transplant human heads.

Whatever happens in the next 1,000 years — whether we merge with machines or become them — one thing is certain: The human race is always changing — and the faster we change and branch out from Earth, the better chance we have of outrunning extinction.

]]>
Sat, 29 Apr 2017 00:49:35 +0000
<![CDATA[NASA Gives Rover An Origami-Inspired Robot Scout]]>http://2045.com/news/35127.html35127NASA has started testing an origami-inspired scout robot that will be used to explore the Martian surface.

Mars exploration missions have gained traction in the last few years, and space agencies are developing new rovers and robots that can enable scientists to garner more details of the Red Planet. 

PUFFER: A New Robot Scout

Pop-Up Flat Folding Explorer Robot or PUFFER has been developed by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California.

This device was introduced by Jaakko Karras, the project manager of PUFFER at JPL, while he was testing origami designs. Karras and his associates then thought of building the devices around a printed circuit board.

PUFFER has a lightweight structure and is built so that it can tuck in its wheels, flatten itself, and explore places that a typical rover cannot access.

Features Of PUFFER

The scout robot has been tested under varied rugged conditions, from the Mojave Desert in California to the frozen plains of Antarctica. It was put through these tests to ensure its functionality in all kinds of terrain, whether sand-covered or snow-laden.

Originally the device had four wheels, but the current version has only two, both foldable. Folding the wheels over the body allows the machine to both roll and crawl.

It also has a tail for stability. The robot includes a "microimager," which is a high-resolution camera, and solar panels placed on its belly. The machine flips over when its batteries are drained, exposing the belly-mounted panels to sunlight so they can recharge the batteries.

PUFFER can climb slopes of up to 45 degrees and can even drop into craters and pits unharmed. The robot is seen as a strong assistant to the larger robotic vehicles that will be sent to Mars in the near term.

"They can do parallel science with a rover, so you can increase the amount you're doing in a day. We can see these being used in hard-to-reach locations - squeezing under ledges, for example," stated Karras.

Another member of the PUFFER group, Christine Fuller of JPL, said that PUFFER's body and electronics are built around a circuit board; there are no separate fasteners or other parts attached to it. The robot has an integrated body.

The team has built a prototype of PUFFER and has been testing it for the past few months. Project officials say the device is not yet ready: they plan to give the robot more autonomy and to add scientific instruments, such as gear that can identify carbon-containing organic molecules.

]]>
Wed, 15 Mar 2017 09:22:27 +0000
<![CDATA[Brain activity appears to continue after people are dead, according to new study]]>http://2045.com/news/35120.html35120Brain activity may continue for more than 10 minutes after the body appears to have died, according to a new study.

Canadian doctors in an intensive care unit appear to have observed a person's brain continuing to work even after they were declared clinically dead.

In the case, doctors confirmed their patient was dead through a range of the normal observations, including the absence of a pulse and unreactive pupils. But tests showed that the patient’s brain appeared to keep working – experiencing the same kind of brain waves that are seen during deep sleep.

In a study noting that the findings could lead to new medical and ethical challenges, the doctors reported that “single delta wave bursts persisted following the cessation of both the cardiac rhythm and arterial blood pressure (ABP)”. The study was published by a team from the University of Western Ontario.

Only one of the four people studied exhibited the long-lasting and mysterious brain activity; in most of the patients, activity died off before their heart stopped beating. But all of their brains behaved differently in the minutes after they died – adding further mystery to what happens to them after death.

The doctors don’t know what the purpose of the activity might be, and caution against drawing too many conclusions from such a small sample. But they write that it is difficult to think the activity was the result of a mistake, given that all of the equipment appeared to be working fine.

Researchers had previously thought that almost all brain activity ended in one huge, mysterious surge about a minute after death. But those studies were based on rats – and the new research found no comparable effect in humans.

“We did not observe a delta wave within 1 minute following cardiac arrest in any of our four patients,” they write in the new study.

What happens to the body and mind after death remains almost entirely mysterious to scientists. Two other studies last year, for instance, demonstrated that genes appeared to continue functioning – and even function more energetically – in the days after people die.

]]>
Fri, 10 Mar 2017 17:51:25 +0000
<![CDATA[Researchers Take A Step Toward Mind-Controlled Robots]]>http://2045.com/news/35121.html35121What if your friend the robot could tell what you're thinking, without you saying a word?

Researchers at MIT's Computer Science and Artificial Intelligence Lab and Boston University have created a system where humans can guide robots with their brainwaves. This may sound like a theory out of a sci-fi novel, but the goal of seamless human-robot interaction is the next major frontier for robotic research.

For now, the MIT system can only handle simple binary activities such as correcting a robot as it sorts objects into two boxes, but CSAIL Director Daniela Rus sees a future where one day we could control robots in more natural ways, rather than having to program them for specific tasks — like allowing a supervisor on a factory floor to control a robot without ever pushing a button.

"Imagine you look at the robots, and at some point one robot is not doing the job correctly," Rus explained. "You will think that, you will have that thought, and through this detection you would in fact communicate remotely with the robot to say 'stop.' "

Rus admits the MIT development is a baby step, but she says it's an important step toward improving the way humans and robots interact.

Currently, most communication with robots requires thinking in a particular way that computers can recognize or vocalizing a command, which can be exhausting.

"We would like to change the paradigm," Rus said. "We would like to get the robot to adapt to the human language."

The MIT paper shows it's possible to have a robot read your mind — at least when it comes to a super simplistic task. And Andres Salazar-Gomez, a Boston University Ph.D. candidate working with the CSAIL research team, says this system could one day help people who can't communicate verbally.

Meet Baxter

For this study, MIT researchers used a robot named Baxter from Rethink Robotics.

Baxter had a simple task: Put a can of spray paint into the box marked "paint" and a spool of wire in the box labeled "wire." A volunteer hooked up to an EEG cap, which reads electrical activity in the brain, sat across from Baxter, and observed him doing his job. If they noticed a mistake, they would naturally emit a brain signal known as an "error-related potential."

"You can use [that signal] to tell a robot to stop or you can use that to alter the action of the robot," Rus explained.

The system then translates that brain signal to Baxter, so he understands he's wrong, his cheeks blush to show he's embarrassed, and he corrects his behavior.

The MIT system correctly identified the volunteer's brain signal and then corrected the robot's behavior 70 percent of the time.
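
A minimal sketch of that closed loop, assuming a hypothetical EEG source, classifier, and robot interface (read_eeg_window, detect_error_potential, and the robot object below are placeholders, not MIT's or Rethink Robotics' APIs): after the robot commits to a bin, the observer's EEG window is checked for an error-related potential, and a detected ErrP flips the choice.

import random

def read_eeg_window(n_samples=256):
    """Placeholder for reading a short window of EEG samples from the cap."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

def detect_error_potential(window):
    """Stand-in classifier for an error-related potential (ErrP).

    A real system would run a trained classifier on epoch-locked EEG features;
    here a crude amplitude threshold marks the window as an 'error' response.
    """
    return max(abs(sample) for sample in window) > 3.0

def sort_objects(objects, robot):
    """Let a (hypothetical) robot sort objects, flipping its choice on an ErrP."""
    for obj in objects:
        chosen_bin = robot.guess_bin(obj)              # robot picks "paint" or "wire"
        robot.move_toward(chosen_bin)
        if detect_error_potential(read_eeg_window()):  # observer's brain flags a mistake
            chosen_bin = "wire" if chosen_bin == "paint" else "paint"
            robot.move_toward(chosen_bin)              # correct the motion midway
        robot.drop_into(chosen_bin)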

Making robots effective "collaborators"

"I think this is exciting work," said Bin He, a biomedical engineer at the University of Minnesota, who published a paper in December that showed people can control a robotic arm with their minds.

He was not affiliated with the MIT research, but he sees this as a "clever" application in a growing yet nascent field.

Researchers say there's an increasing desire to find ways to make robots effective "collaborators," not just obedient servants.

"One key aspect of collaboration is being able ... to know when you're making a mistake," said Siddhartha Srinivasa, a professor at Carnegie Mellon University who was not affiliated with the MIT study. "What this paper shows is how you can use human intuition to boot-strap a robot's learning of what its world looks like and how it can know right from wrong."

Srinivasa says this research could potentially have key implications for prosthetics, but cautions it's an "excellent first step toward solving a harder, much more complicated problem."

"There's a long gray line between not making a mistake and making a mistake," Srinivasa said. "Being able to decode more of the neuronal activity... is really critical."

And Srinivasa says that's a topic that more scientists need to explore.

Potential real-world applications

MIT's Rus imagines a future where anybody can communicate with a robot without any training — a world where this technology could help steer a self-driving car or clean up your home.

"Imagine ... you have your robot pick up all the toys and socks from the floor, and you want the robot to put the socks in the sock bin and put the toys in the toy bin," she said.

She says that would save her a lot of time, but for now the mechanical house cleaner that can read your mind is still a dream.

]]>
Wed, 8 Mar 2017 17:54:13 +0000
<![CDATA[Ghost Minitaur™ Highly Agile Direct-Drive Quadruped Demonstrates Why Legged Robots are Far Superior to Wheels and Tracks When Venturing Outdoors]]>http://2045.com/news/35119.html35119Ghost Robotics, a leader in fast and lightweight direct-drive (gearless) legged robots, announced today that its patent-pending Ghost Minitaur™ has been updated with advanced reactive behaviors for navigating grass, rock, sand, snow and ice fields, urban objects and debris, and vertical terrain. (https://youtu.be/bnKOeMoibLg)

The latest gaits adapt reactively to unstructured and dynamic environments to maintain balance, ascend steep inclines (up to 35º), handle curb-sized steps in stride (up to 15cm), crouch to fit under crawl spaces (as low as 27cm), and operate at variable speeds and turning rates. Minitaur's high-force capabilities enable it to leap onto ledges (up to 40cm) and across gaps (up to 80cm). Its high control bandwidth allows it to actively balance on two legs, and high speed operation allows its legs to manipulate the world faster than the blink of an eye, while deftly reacting to unexpected contact.
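
The release does not spell out how these behaviors are implemented, but one reason direct-drive legs lend themselves to such reactive control is that, with no gearbox, the torque the motor feels closely reflects the force at the foot. The sketch below only illustrates that general idea under assumed names and numbers, and is not Ghost Robotics' software: each leg is driven like a virtual spring-damper, and touchdown is inferred from the gap between commanded and measured motor torque.

def virtual_spring_torque(angle, setpoint, velocity, stiffness=8.0, damping=0.4):
    """Torque command that makes a direct-drive leg act like a spring-damper."""
    return stiffness * (setpoint - angle) - damping * velocity

def contact_detected(commanded_torque, measured_torque, threshold=0.5):
    """Infer touchdown when the motor resists noticeably more than commanded.

    With no gearbox between motor and leg, the measured motor torque is a
    reasonable proxy for the force the foot is experiencing.
    """
    return (measured_torque - commanded_torque) > threshold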

"Our primary focus since releasing the Minitaur late last year has been expanding its behaviors to traverse a wide range of terrains and real-world operating scenarios," said Gavin Kenneally, and Avik De, Co-founders of Ghost Robotics. "In a short time, we have shown that legged robots not only have superior baseline mobility over wheels and tracks in a variety of environments and terrains, but also exhibit a diverse set of behaviors that allow them to easily overcome natural obstacles. We are excited to push the envelope with future capabilities, improved hardware, as well as integrated sensing and autonomy."

Ghost Robotics is designing next-generation legged robots that outperform wheeled and tracked autonomous vehicles in real-world field applications, while substantially reducing costs to drive adoption and scalable deployment. Its direct-drive technology yields a lower-cost, more durable design for very small to medium-sized legged UGV sensor platforms than any competing approach. The company's underlying research and intellectual property also have applications in ultra-precise, human-safe manipulators and in advanced gait research.

While a commercial version of the Ghost Minitaur™ robot is slated for delivery in the future, the current development platform is in high demand and has been shipped to many top robotics researchers worldwide because of its design simplicity, low cost and flexible software development environment for a broad range of research and commercialization initiatives.

"We are pleased with our R&D progress towards commercializing the Ghost Minitaur™ to prove legged robots can surpass the performance of wheel and track UGVs, while keeping the cost model low to support volume adoption, which is certainly not the case with existing bipedal and quadrupedal robot vendors," said Jiren Parikh, Ghost Robotics, CEO.

In the coming quarters, the company plans to demonstrate further improvements in mobility, built-in manipulation capabilities to interact with objects in the world, integration with more sensors, built-in autonomy for operation with reduced human intervention, as well as increased mechanical robustness and durability for operation in harsh environments.

About Ghost Robotics

Robots that Feel the World™. Ghost Robotics develops patent-pending, ultrafast and highly responsive direct-drive (no gearbox) legged robots for instantaneous and precise force feedback applications, offering superior operability over wheeled and tracked robots. The lightweight and low-cost Ghost Minitaur™ robot platform can be used as an autonomous vehicle fitted with sensors for ISR, search and rescue, asset management and inspection, exploration, scientific and military applications where unknown, rough, varied, hazardous, environmentally sensitive and even vertical terrain is present. Ghost Robotics is privately held and backed by the University of Pennsylvania and PCI Ventures with offices in Philadelphia. www.ghostrobotics.io

SOURCE Ghost Robotics, LLC

Related Links

http://ghostrobotics.io

]]>
Wed, 1 Mar 2017 17:55:29 +0000
<![CDATA[Boston Dynamics’ newest robot: Introducing Handle]]>http://2045.com/news/35118.html35118Handle is a research robot that stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the other quadruped and biped robots Boston Dynamics builds, but with only about 10 actuated joints it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.

]]>
Tue, 28 Feb 2017 21:35:55 +0000
<![CDATA[The 'Curious' Robots Searching for the Ocean's Secrets]]>http://2045.com/news/35116.html35116People have been exploring the Earth since ancient times—traversing deserts, climbing mountains, and trekking through forests. But there is one ecological realm that hasn’t yet been well explored: the oceans. To date, just 5 percent of Earth’s oceans have been seen by human eyes or by human-controlled robots.

That’s quickly changing thanks to advancements in robotic technologies. In particular, a new class of self-controlled robots that continually adapt to their surroundings is opening the door to undersea discovery.  These autonomous, “curious” machines can efficiently search for specific undersea features such as marine organisms and landscapes, but they are also programmed to keep an eye out for other interesting things that may unexpectedly pop up.

Curious robots—which can be virtually any size or shape—use sensors and cameras to guide their movements. The sensors take sonar, depth, temperature, salinity, and other readings, while the cameras constantly send pictures of what they’re seeing in compressed, low-resolution form to human operators. If an image shows something different from the feature a robot was programmed to explore, the operator can give the robot the okay to go over and check it out in greater detail.

The field of autonomous underwater robots is relatively young, but the curious-robots exploration method has already led to some pretty interesting discoveries, says Hanumant Singh, an ocean physicist and engineer at Woods Hole Oceanographic Institution in Massachusetts. In 2015, he and a team of researchers went on an expedition to study creatures living on Hannibal Seamount, an undersea mountain chain off Panama’s coast. They sent a curious robot down to the seabed from their “manned submersible”—a modern version of the classic Jacques Cousteau yellow submarine—to take photos and videos and collect living organisms on several dives over the course of 21 days.

On the expedition’s final dive, the robot detected an anomaly on the seafloor, and sent back several low-resolution photos of what looked like red fuzz in a very low oxygen zone. “The robot’s operators thought what was in the image might be interesting, so they sent it over to the feature to take more photos,” says Singh. “Thanks to the curious robot, we were able to tell that these were crabs—a whole swarming herd of them.”

The team used submarines to scoop up several live crabs, which were later identified through DNA sequencing as Pleuroncodes planipes, commonly known as pelagic red crabs, a species native to Baja California. Singh says it was extremely unusual to find the crabs so far south of their normal range and in such a high abundance, gathered together like a swarm of insects. Because the crabs serve as an important food source for open-ocean predators in the eastern Pacific, the researchers hypothesize the crabs may be an undetected food source for predators at the Hannibal Seamount, too.

When autonomous robot technology was first being developed 15 years ago, Singh says, he and other scientists were building robots and robotics software from scratch. Today a variety of programming interfaces—some of which are open-source—exist, making scientists’ jobs a little easier. Now they just have to build the robot itself, install some software, and fine-tune some algorithms to fit their research goals.

While curious robot software systems vary, Girdhar says some of the basics remain the same. All curious robots need to collect data, and they do this with their ability to understand different undersea scenes without supervision. This involves “teaching” robots to detect a given class of oceanic features, such as different types of fish, coral, or sediment. The robots must also be able to detect anomalies in context, following a path that balances their programmed mission with their own curiosity.
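
That balance between mission and curiosity can be pictured as a simple scoring rule. The toy sketch below is not the software running on any of these vehicles; the feature labels, the surprise measure, and the weights are assumptions chosen only to illustrate the trade-off: each candidate waypoint is scored by its value to the programmed mission plus how surprising its observed features look against the robot's current scene model.

from math import log

def surprise(observed, scene_model, eps=1e-9):
    """KL-style score: higher when the observed feature mix defies expectations.

    Both arguments map feature labels (e.g. 'coral', 'sediment', 'fish')
    to probabilities.
    """
    return sum(p * log((p + eps) / (scene_model.get(label, eps) + eps))
               for label, p in observed.items())

def choose_next_waypoint(candidates, scene_model,
                         mission_weight=0.7, curiosity_weight=0.3):
    """Pick the waypoint that best balances mission value against surprise.

    candidates: list of (waypoint, mission_value, observed_feature_histogram).
    """
    def score(candidate):
        _, mission_value, observed = candidate
        return (mission_weight * mission_value
                + curiosity_weight * surprise(observed, scene_model))
    return max(candidates, key=score)[0]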

This detection method is different from traditional undersea robots, which are preprogrammed to follow just one exploration path and look for one feature or a set of features, ignoring anomalies or changing oceanic conditions. One example of a traditional robot is Jason, a human-controlled “ROV,” or remotely operated vehicle, used by scientists at Woods Hole to study the seafloor.

Marine scientists see curious robots as a clear path forward. “To efficiently explore and map our oceans, intelligent robots with abilities to deliberate sensor data and make smart decisions are a necessity,” says Øyvind Ødegård, a marine archaeologist and Ph.D. candidate at the Centre for Autonomous Marine Operations and Systems at Norwegian University of Science and Technology.

Ødegård uses robots to detect and investigate shipwrecks, often in places too dangerous for human divers to explore—like the Arctic. Other undersea scientists in fields like biology and chemistry are starting to use curious robots to monitor oil spills and search for invasive species.

Compared to other undersea robots, Ødegård says, autonomous curious robots are best suited to long-term exploration. For shorter missions in already explored marine environments, it’s possible to preprogram robots to cope with predictable situations, says Ødegård. Yet, “for longer missions, with limited prior knowledge of the environment, such predictions become increasingly harder to make. The robot must have deliberative abilities or ‘intelligence’ that is robust enough for coping with unforeseen events in a manner that ensures its own safety and also the goals of the mission.”

One big challenge is sending larger amounts of data to human operators in real time. Water inhibits the movement of electromagnetic signals such as GPS, so curious robots can only communicate in small bits of data. Ødegård says to overcome this challenge, scientists are looking for ways to optimize data processing.

According to Singh, one next step in curious robot technology is teaching the robots to work in tandem with drones to give scientists pictures of sea ice from both above and below. Another is teaching the robots to deal with different species biases. For example, the robots frighten some fish and attract others—and this could cause data anomalies, making some species appear less or more abundant than they actually are.

Ødegård adds that new developments in robotics software could give even scientists without a background in robotics the opportunity to reap the benefits of robotics research. “I hope we will see more affordable robots that lower the threshold for playing with them and taking risks,” he says. “That way it will be easier to find new and innovative ways to use them.”

]]>
Thu, 23 Feb 2017 15:27:38 +0000
<![CDATA[What Happens When Robots Become Role Models]]>http://2045.com/news/35112.html35112When you spend a lot of time with someone, their characteristics can rub off on you. But what happens when that someone is a robot?

As artificial intelligence systems become increasingly human, their abilities to influence people also improve. New Scientist reports that children who spend time with a robotic companion appear to pick up elements of its behavior. New experiments suggest that when kids play with a robot that’s a real go-getter, for instance, the child acquires some of its unremitting can-do attitude.

Other researchers are seeking to take advantage of similar effects in adults. A group at the Queensland University of Technology is enrolling a small team of pint-sized humanoid Nao robots to coach people to eat healthy. It hopes that chatting through diet choices with a robot, rather than logging calorie consumption on a smartphone, will be more effective in changing habits. It could work: as our own Will Knight has found out in the past, some conversational AI interfaces can be particularly compelling.

So as personal robots increasingly enter the home, robots may not just do our bidding—they might also become role models, too. And that means we must tread carefully, because while the stories above hint at the possibilities of positive reinforcement from automatons, others hint at potential negative effects.

Some parents, for instance, have complained that Amazon’s Alexa personal assistant is training their children to be rude. Alexa doesn’t need people to say please and thank you, will tolerate answering the same question over and over, and remains calm in the face of tantrums. In short: it doesn’t prime kids for how to interact with real people.

The process can flow both ways, of course. Researchers at Stanford University recently developed a robot that was designed to roam sidewalks, monitor humans, and learn how to behave with them naturally and appropriately. But as we’ve seen in the case of Microsoft’s AI chatbot, Tay—which swiftly became rude and anti-Semitic when it learned from Twitter users—taking cues from the crowd doesn’t always play out well.

In reality, there isn’t yet a fast track to creating robots that are socially intelligent—it remains one of the large unsolved problems of AI. That means that roboticists must instead carefully choose the traits they wish to be present in their machines, or else risk delivering armies of bad influence into our homes.

]]>
Wed, 22 Feb 2017 07:44:34 +0000