Living robots built using frog cells

A team of scientists has repurposed living cells — scraped from frog embryos — and assembled them into entirely new life-forms. These millimeter-wide “xenobots” can move toward a target, perhaps pick up a payload (like a medicine that needs to be carried to a specific place inside a patient) — and heal themselves after being cut.

“These are novel living machines,” says Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

The new creatures were designed on a supercomputer at UVM — and then assembled and tested by biologists at Tufts University. “We can imagine many useful applications of these living robots that other machines can’t do,” says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, “like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque.”

The results of the new research were published January 13 in the Proceedings of the National Academy of Sciences.

Bespoke Living Systems

People have been manipulating organisms for human benefit since at least the dawn of agriculture. Genetic editing is becoming widespread, and in the past few years a handful of artificial organisms have been manually assembled — copying the body forms of known animals.

But this research, for the first time ever, “designs completely biological machines from the ground up,” the team writes in their new study.

With months of processing time on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, the team — including lead author and doctoral student Sam Kriegman — used an evolutionary algorithm to create thousands of candidate designs for the new life-forms. Attempting to achieve a task assigned by the scientists — like locomotion in one direction — the computer would, over and over, reassemble a few hundred simulated cells into myriad forms and body shapes. As the programs ran — driven by basic rules about the biophysics of what single frog skin and cardiac cells can do — the more successful simulated organisms were kept and refined, while failed designs were tossed out. After a hundred independent runs of the algorithm, the most promising designs were selected for testing.
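
In outline, this is a standard evolutionary algorithm: generate random candidate bodies, score each one on the task, keep and mutate the most successful, and discard the rest. The toy Python sketch below illustrates only that loop; the grid representation, the mutation rate, and especially the fitness function are invented stand-ins, not the team's physics-based simulation of three-dimensional cell arrangements.

import random

GRID = 4                    # each candidate body is a 4x4 grid of cells
CELL_TYPES = (0, 1, 2)      # 0 = empty, 1 = passive "skin", 2 = contractile "muscle"

def random_design():
    return [random.choice(CELL_TYPES) for _ in range(GRID * GRID)]

def fitness(design):
    # Invented stand-in for the physics simulation: reward bodies that mix
    # muscle with skin. The real pipeline instead measured how far a
    # simulated body moved in the target direction.
    muscle, skin = design.count(2), design.count(1)
    return muscle * skin / (muscle + skin) if (muscle + skin) else 0.0

def mutate(design, rate=0.1):
    return [random.choice(CELL_TYPES) if random.random() < rate else cell
            for cell in design]

def evolve(pop_size=200, generations=100):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # most successful first
        survivors = population[: pop_size // 2]      # keep and refine the best...
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children            # ...and toss out the failures
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best design:", best, "fitness:", round(fitness(best), 3))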

Then the team at Tufts, led by Levin and with key work by microsurgeon Douglas Blackiston, transferred the in silico designs into life. First they gathered stem cells, harvested from the embryos of African frogs, the species Xenopus laevis. (Hence the name “xenobots.”) These were separated into single cells and left to incubate. Then, using tiny forceps and an even tinier electrode, the cells were cut and joined under a microscope into a close approximation of the designs specified by the computer.

Assembled into body forms never seen in nature, the cells began to work together. The skin cells formed a more passive architecture, while the once-random contractions of heart muscle cells were put to work creating ordered forward motion as guided by the computer’s design, and aided by spontaneous self-organizing patterns — allowing the robots to move on their own.

These reconfigurable organisms were shown to be able to move in a coherent fashion — and explore their watery environment for days or weeks, powered by embryonic energy stores. Turned over, however, they failed, like beetles flipped on their backs.

Later tests showed that groups of xenobots would move around in circles, pushing pellets into a central location — spontaneously and collectively. Others were built with a hole through the center to reduce drag. In simulated versions of these, the scientists were able to repurpose this hole as a pouch to successfully carry an object. “It’s a step toward using computer-designed organisms for intelligent drug delivery,” says Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center.

Living Technologies

Many technologies are made of steel, concrete or plastic. That can make them strong or flexible. But they also can create ecological and human health problems, like the growing scourge of plastic pollution in the oceans and the toxicity of many synthetic materials and electronics. “The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades.” And when they stop working — death — they usually fall apart harmlessly. “These xenobots are fully biodegradable,” says Bongard. “When they’re done with their job after seven days, they’re just dead skin cells.”

Your laptop is a powerful technology. But try cutting it in half. Doesn’t work so well. In the new experiments, the scientists cut the xenobots and watched what happened. “We sliced the robot almost in half and it stitches itself back up and keeps going,” says Bongard. “And this is something you can’t do with typical machines.”

Cracking the Code

Both Levin and Bongard say the potential of what they’ve been learning about how cells communicate and connect extends deep into both computational science and our understanding of life. “The big question in biology is to understand the algorithms that determine form and function,” says Levin. “The genome encodes proteins, but transformative applications await our discovery of how that hardware enables cells to cooperate toward making functional anatomies under very different conditions.”

To make an organism develop and function, there is a lot of information sharing and cooperation — organic computation — going on in and between cells all the time, not just within neurons. These emergent and geometric properties are shaped by bioelectric, biochemical, and biomechanical processes, “that run on DNA-specified hardware,” Levin says, “and these processes are reconfigurable, enabling novel living forms.”

The scientists see the work presented in their new PNAS study, “A scalable pipeline for designing reconfigurable organisms,” as one step in applying insights about this bioelectric code to both biology and computer science. “What actually determines the anatomy towards which cells cooperate?” Levin asks. “You look at the cells we’ve been building our xenobots with, and, genomically, they’re frogs. It’s 100% frog DNA — but these are not frogs. Then you ask, well, what else are these cells capable of building?”

“As we’ve shown, these frog cells can be coaxed to make interesting living forms that are completely different from what their default anatomy would be,” says Levin. He and the other scientists in the UVM and Tufts team — with support from DARPA’s Lifelong Learning Machines program and the National Science Foundation — believe that building the xenobots is a small step toward cracking what he calls the “morphogenetic code,” providing a deeper view of the overall way organisms are organized — and how they compute and store information based on their histories and environment.

Future Shocks

Many people worry about the implications of rapid technological change and complex biological manipulations. “That fear is not unreasonable,” Levin says. “When we start to mess around with complex systems that we don’t understand, we’re going to get unintended consequences.” A lot of complex systems, like an ant colony, begin with a simple unit — an ant — from which it would be impossible to predict the shape of the colony or how the ants can build bridges over water with their interlinked bodies.

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. Much of science is focused on “controlling the low-level rules. We also need to understand the high-level rules,” he says. “If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex,” Levin says. “A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

In other words, “this study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences,” Levin says — whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.

“There’s all of this innate creativity in life,” says UVM’s Josh Bongard. “We want to understand that more deeply — and how we can direct and push it toward new forms.”

Image credit: www.sciencedaily.com

Source

Artificial intelligence is being used to make decisions about your life whether you like it or not

It’s a common psychological phenomenon: repeat any word enough times, and it eventually loses all meaning, disintegrating like soggy tissue into phonetic nothingness. For many of us, the phrase “artificial intelligence” fell apart in this way a long time ago. AI is everywhere in tech right now, said to be powering everything from your TV to your toothbrush, but never have the words themselves meant less.

It shouldn’t be this way.

While the phrase “artificial intelligence” is unquestionably, undoubtedly misused, the technology is doing more than ever — for both good and bad. It’s being deployed in health care and warfare; it’s helping people make music and books; it’s scrutinizing your resume, judging your creditworthiness, and tweaking the photos you take on your phone. In short, it’s making decisions that affect your life whether you like it or not.

It can be difficult to square this with the hype and bluster with which AI is discussed by tech companies and advertisers. Take, for example, Oral-B’s Genius X toothbrush, one of the many devices unveiled at CES this year that touted supposed “AI” abilities. But dig past the top line of the press release, and all this means is that it gives pretty simple feedback about whether you’re brushing your teeth for the right amount of time and in the right places. There are some clever sensors involved to work out where in your mouth the brush is, but calling it artificial intelligence is gibberish, nothing more.

When there’s not hype involved, there’s misunderstanding. Press coverage can exaggerate research, sticking a picture of a Terminator on any vaguely AI story. Often this comes down to confusion about what artificial intelligence even is. It can be a tricky subject for non-experts, and people often mistakenly conflate contemporary AI with the version they’re most familiar with: a sci-fi vision of a conscious computer many times smarter than a human. Experts refer to this specific instance of AI as artificial general intelligence, and if we do ever create something like this, it’ll likely be a long way in the future. Until then, no one is helped by exaggerating the intelligence or capabilities of AI systems.

It’s better, then, to talk about “machine learning” rather than AI. This is a subfield of artificial intelligence, and one that encompasses pretty much all the methods having the biggest impact on the world right now (including what’s called deep learning). As a phrase, it doesn’t have the mystique of “AI,” but it’s more helpful in explaining what the technology does.

How does machine learning work? Over the past few years, I’ve read and watched dozens of explanations, and the distinction I’ve found most useful is right there in the name: machine learning is all about enabling computers to learn on their own. But what that means is a much bigger question.

Let’s start with a problem. Say you want to create a program that can recognize cats. (It’s always cats for some reason). You could try and do this the old-fashioned way by programming in explicit rules like “cats have pointy ears” and “cats are furry.” But what would the program do when you show it a picture of a tiger? Programming in every rule needed would be time-consuming, and you’d have to define all sorts of difficult concepts along the way, like “furriness” and “pointiness.” Better to let the machine teach itself. So you give it a huge collection of cat photos, and it looks through those to find its own patterns in what it sees. It connects the dots, pretty much randomly at first, but you test it over and over, keeping the best versions. And in time, it gets pretty good at saying what is and isn’t a cat.
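
Here is what that looks like in miniature, as a hedged Python sketch. Every detail is invented for illustration: each “photo” is reduced to two made-up numbers (ear pointiness and body weight) instead of real pixels, and the learner is a tiny logistic regression whose weights start out essentially random and get nudged toward better guesses on every pass, which is the testing-it-over-and-over step in practice.

import numpy as np

rng = np.random.default_rng(0)

# Invented training data: each row is [ear_pointiness, body_weight_kg],
# labelled 1 for "house cat" and 0 for "tiger". Real systems learn from pixels.
X = np.vstack([
    rng.normal([0.8, 4.0], [0.1, 1.0], size=(200, 2)),      # house cats
    rng.normal([0.8, 120.0], [0.1, 20.0], size=(200, 2)),   # tigers
])
y = np.array([1] * 200 + [0] * 200)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # put features on a common scale

w = rng.normal(size=2)   # the model's "dots" start out connected almost randomly
b = 0.0

for _ in range(2000):    # test, correct, repeat
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current guesses for every photo
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # nudge each weight to reduce the error
    b -= 0.1 * (p - y).mean()

print("training accuracy:", ((p > 0.5) == y).mean())   # close to 1.0 on this toy data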

So far, so predictable. In fact, you’ve probably read an explanation like this before, and I’m sorry for it. But what’s important is not reading the gloss but really thinking about what that gloss implies. What are the side effects of having a decision-making system learn like this?

Well, the biggest advantage of this method is the most obvious: you never have to actually program it. Sure, you do a hell of a lot of tinkering, improving how the system processes the data and coming up with smarter ways of ingesting that information, but you’re not telling it what to look for. That means it can spot patterns that humans might miss or never think of in the first place. And because all the program needs is data — 1s and 0s — there are so many jobs you can train it on because the modern world is just stuffed full of data. With a machine learning hammer in your hand, the digital world is full of nails ready to be bashed into place.

But then think about the disadvantages, too. If you’re not explicitly teaching the computer, how do you know how it’s making its decisions? Machine learning systems can’t explain their thinking, and that means your algorithm could be performing well for the wrong reasons. Similarly, because all the computer knows is the data you feed it, it might pick up a biased view of the world, or it might only be good at narrow tasks that look similar to the data it’s seen before. It doesn’t have the common sense you’d expect from a human. You could build the best cat-recognizer program in the world and it would never tell you that kittens shouldn’t drive motorbikes or that a cat is more likely to be called “Tiddles” than “Megalorth the Undying.”
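
One way to make the “right answer for the wrong reasons” problem concrete is to bake a spurious pattern into hypothetical training data. In the sketch below, every training photo of a cat happens to have been taken indoors, so the same simple learner as above leans on the “indoors” signal rather than anything cat-like, and then dismisses a perfectly ordinary outdoor cat. All features and numbers are made up to illustrate the failure mode, not drawn from any real system.

import numpy as np

rng = np.random.default_rng(1)

# Invented features: [looks_furry, photo_taken_indoors]. In this deliberately
# skewed training set, every cat photo is an indoor photo.
cats = np.column_stack([rng.random(300) > 0.1, np.ones(300)])        # mostly furry, all indoors
not_cats = np.column_stack([rng.random(300) > 0.5, np.zeros(300)])   # sometimes furry, all outdoors
X = np.vstack([cats, not_cats]).astype(float)
y = np.array([1] * 300 + [0] * 300)

w, b = np.zeros(2), 0.0
for _ in range(2000):                          # same logistic-regression loop as before
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

print("learned weights [furry, indoors]:", np.round(w, 2))   # the "indoors" weight dominates

outdoor_cat = np.array([1.0, 0.0])             # furry, but photographed outside
p_cat = 1.0 / (1.0 + np.exp(-(outdoor_cat @ w + b)))
print("chance this outdoor cat is a cat:", round(float(p_cat), 2))   # well below 0.5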

Teaching computers to learn for themselves is a brilliant shortcut. And like all shortcuts, it involves cutting corners. There’s intelligence in AI systems, if you want to call it that. But it’s not organic intelligence, and it doesn’t play by the same rules humans do. You may as well ask: how clever is a book? What expertise is encoded in a frying pan?

So where do we stand now with artificial intelligence? After years of headlines announcing the next big breakthrough (which, well, they haven’t quite stopped yet), some experts think we’ve reached something of a plateau. But that’s not really an impediment to progress. On the research side, there are huge numbers of avenues to explore within our existing knowledge, and on the product side, we’ve only seen the tip of the algorithmic iceberg.

Kai-Fu Lee, a venture capitalist and former AI researcher, describes the current moment as the “age of implementation” — one where the technology starts “spilling out of the lab and into the world.” Benedict Evans, another VC strategist, compares machine learning to relational databases, a type of enterprise software that made fortunes in the ’90s and revolutionized whole industries, but that’s so mundane your eyes probably glazed over just reading those two words. The point both these people are making is that we’re now at the point where AI is going to get normal fast. “Eventually, pretty much everything will have [machine learning] somewhere inside and no-one will care,” says Evans.

He’s right, but we’re not there yet.

In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined. So in this week’s special issue of The Verge, AI Week, we’re going to show you how it’s all happening right now, how this technology is being used to change things. Because in the future, it’ll be so normal you won’t even notice.

Image credit: www.theverge.com

Source

Talking with Scoble about the Contextual Web


Here’s the audio track of a great conversation I had back in 2012 with my good friend Robert Scoble at his home in Half Moon Bay, CA. We talked a lot about context, since this was right before he announced the Age of Context book he is doing with Shel Israel. In this conversation Robert and I talked about the contextual web, Google Glass (long before we had it), privacy, personas, contextual content, contextual marketing, and why we need to start building some open standards regarding how context is discovered, communicated, and permissioned. Even though this discussion was over a year ago, I think a lot of what we talked about is still quite relevant. 

Note: To hear the audio just click on the play button or links in Robert’s post… his post is embedded as a live object.

{source}<center>
<div id="fb-root"></div>
<script>(function(d, s, id) { var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/en_US/all.js#xfbml=1"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk'));</script>
<div class="fb-post" data-href="https://www.facebook.com/RobertScoble/posts/314553685305013"><div class="fb-xfbml-parse-ignore"><a href="https://www.facebook.com/RobertScoble/posts/314553685305013">Post</a> by <a href="https://www.facebook.com/RobertScoble">Robert Scoble</a>.</div></div>
</center>
{/source}

 
The pic below shows a snippet of Robert’s whiteboard after our discussion.