A forum for discussing emerging smart discoveries and technologies with built-in intelligence or embedded smarts, as well as the new cognitive skills needed to succeed in the smart economy. The Smart Future is already here; the last page just hasn't been written yet. Every advance brings benefits as well as intrusions.
Have your say !! Read, enjoy, explore, speculate, comment !!
Unexpected behavior in ferroelectric materials explored by researchers at Oak Ridge National Laboratory supports a new approach to information storage and processing known as memcomputing.
OAK RIDGE, Tenn., Nov. 18, 2013—Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.
Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.
“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, the first author on the paper published in Nature Physics. “All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.”
After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away -- a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.
“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
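For readers who like to tinker, here is a minimal toy model of that spatial interplay. It is a sketch under invented parameters, not the ORNL physics: each attempted domain write succeeds only if the net influence of earlier domains, suppressive up close and mildly facilitating farther out, leaves the local switching threshold reachable.

```python
# Toy 1-D model of sequential domain writing. The kernel shape,
# length scales and threshold are illustrative assumptions only.
import math

def influence(d, r_supp=1.0, r_fac=2.5):
    """Influence of an existing domain at distance d on a new write:
    suppressive (negative) nearby, mildly facilitating around r_fac."""
    return -math.exp(-(d / r_supp) ** 2) + 0.5 * math.exp(-((d - r_fac) / r_supp) ** 2)

def write_row(spacing, n=40, threshold=-0.3):
    """Attempt n equally spaced writes; keep only the domains that form."""
    formed = []
    for i in range(n):
        x = i * spacing
        bias = sum(influence(x - p) for p in formed)
        if bias > threshold:  # local conditions still allow switching
            formed.append(x)
    return formed

for s in (3.0, 1.5, 0.8):  # tighter and tighter write spacing
    print(f"spacing {s}: {len(write_row(s))}/40 domains formed")
```

At wide spacing every write lands; as the pitch shrinks, neighbors start vetoing one another, a crude echo of the alternating patterns the team observed.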
Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.
“Memcomputing is basically how the human brain operates: Neurons and their connections--synapses--can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”
Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.
“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.
The study is published as “Intermittency, quasiperiodicity, and chaos during scanning probe microscopy tip-induced ferroelectric domain switching,” and is available online. Coauthors are ORNL’s Stephen Jesse, Evgheni Strelcov, Sergei Kalinin and Amit Kumar; the National Academy of Sciences of Ukraine’s Anna Morozovska and Eugene Eliseev; the University of South Carolina’s Yuriy Pershin; and Ural Federal University’s Vladimir Shur. Ievlev, formerly of Ural Federal University, has joined ORNL as a postdoctoral fellow.
Part of this research was conducted at the Center for Nanophase Materials Sciences, which is sponsored at ORNL by the Scientific User Facilities Division in DOE’s Office of Basic Energy Sciences. CNMS is one of the five DOE Nanoscale Science Research Centers supported by the DOE Office of Science, premier national user facilities for interdisciplinary research at the nanoscale. Together the NSRCs comprise a suite of complementary facilities that provide researchers with state-of-the-art capabilities to fabricate, process, characterize and model nanoscale materials, and constitute the largest infrastructure investment of the National Nanotechnology Initiative. The NSRCs are located at DOE's Argonne, Brookhaven, Lawrence Berkeley, Oak Ridge, Sandia and Los Alamos national laboratories. For more information about the DOE NSRCs, please visit http://science.energy.gov/bes/suf/user-facilities/nanoscale-science-research-centers/.
ORNL is managed by UT-Battelle for the Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.
ANN ARBOR, Mich.---The Mimosa plant, which folds its leaves when they're touched, is inspiring a new class of adaptive structures designed to twist, bend, stiffen and even heal themselves. University of Michigan researchers are leading their development.
Mechanical engineering professor Kon-Well Wang will present the team's latest work Feb. 19 at the American Association for the Advancement of Science's 2011 Annual Meeting in Washington D.C. He will also speak at a news briefing earlier that day. Wang is the Stephan P. Timoshenko Collegiate Professor of Mechanical Engineering and chair of the Department of Mechanical Engineering.
"This is quite different from other traditional adaptive materials approaches," Wang said. "In general, people use solid-state materials to make adaptive structures. This is really a unique concept inspired by biology."
Researchers at U-M and Penn State University are studying how plants like the Mimosa can change shape, and they're working to replicate the mechanisms in artificial cells. Today, their artificial cells are palm-size and larger. But they're trying to shrink them by building them with microstructures and nanofibers. They're also exploring how to replicate the mechanisms by which plants heal themselves.
"We want to put it all together to create hyper-cellular structures with circulatory networks," Wang said.
The Mimosa is among the plant varieties that exhibit specialized "nastic motions," large movements you can see in real time with the naked eye, said Erik Nielsen, assistant professor in the U-M Department of Molecular, Cellular and Developmental Biology.
The phenomenon is made possible by osmosis, the flow of water in and out of plants' cells. Triggers such as touch cause water to leave certain plant cells, collapsing them. Water enters other cells, expanding them. These microscopic shifts allow the plants to move and change shape on a larger scale.
It's hydraulics, the researchers say.
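A quick back-of-envelope sketch of why hydraulics works as an actuator: treat the leaf joint as a two-layer strip in which osmosis swells one cell layer and drains the other. The numbers below are illustrative assumptions, not measurements from the U-M work.

```python
# Minimal bilayer-bending estimate: a strain mismatch between two cell
# layers curls the strip, roughly how a Mimosa leaf joint folds.
import math

def bend_angle_deg(strain_top, strain_bottom, length_mm, thickness_mm):
    """Curvature of a bilayer from differential expansion:
    kappa ~ (strain_top - strain_bottom) / thickness."""
    kappa = (strain_top - strain_bottom) / thickness_mm  # 1/mm
    return math.degrees(kappa * length_mm)               # total bend angle

# 5% swelling on top, 5% shrinkage below, over a 10 mm strip 1 mm thick
print(f"{bend_angle_deg(0.05, -0.05, 10.0, 1.0):.0f} degrees")  # ~57 degrees
```

A ten-percent strain mismatch is enough to fold the strip through nearly sixty degrees, which is why small, reversible water movements can produce motion you can watch in real time.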
"We know that plants can deform with large actuation through this pumping action," Wang said. "This and several other characteristics of plant cells and cell walls have inspired us to initiate ideas that could concurrently realize many of the features that we want to achieve for adaptive structures."
Nielsen believes nastic movements might be a good place to start trying to replicate plant motions because they don't require new growth or a reorganization of cells.
"These rapid, nastic motions are based on cells and tissues that are already there," Nielsen said. "It's easy for a plant to build new cells and tissues during growth, but it's not as easy to engineer an object or machine to completely change the way it's organized. We hope studying these motions can inform us about how to make efficient adaptive materials that display some of the same types of flexibility that we see in biological systems."
When this technology matures, Wang said it could enable robots that change shape like elephant trunks or snakes to maneuver under a bridge or through a tunnel, but then turn rigid to grab hold of something. It also could lead to morphing wings that would allow airplanes to behave more like birds, changing their wing shape and stiffness in response to their environment or the task at hand.
At U-M, Michael Mayer, associate professor in the departments of Biomedical Engineering and Chemical Engineering, is also involved in the research. At Penn State, the project involves Charles Bakis, distinguished professor of engineering sciences and mechanics, and Christopher Rahn, professor of mechanical engineering. This project is currently funded by the National Science Foundation.
EDITORS: Professor Kon-Well Wang will speak at a news briefing on Saturday, Feb. 19 at 11 a.m. in room 202B at the Washington Convention Center. He will give his full presentation, "Learning from Plants: Bio-Inspired Multi-Functional Adaptive Structural Systems," during the session "If Termites Can Do It, Why Can't Humans?" 1:30-4:30 p.m. Saturday, Feb. 19, in room 145A.
The University of Michigan College of Engineering is ranked among the top engineering schools in the country. At $180 million annually, its engineering research budget is one of the largest of any public university. Michigan Engineering is home to 11 academic departments, numerous research centers and expansive entrepreneurial programs. The College plays a leading role in the Michigan Memorial Phoenix Energy Institute and hosts the world-class Lurie Nanofabrication Facility. Michigan Engineering's premier scholarship, international scale and multidisciplinary scope combine to create The Michigan Difference. Find out more at http://www.engin.umich.edu/.
To build the next generation of sensors – with applications ranging from medical devices to robotics to new consumer goods – Chang Liu looks to biology.
Liu, professor of mechanical engineering and electrical engineering and computer science at Northwestern University's McCormick School of Engineering and Applied Science, is using insights from nature as inspiration for both touch and flow sensors — areas that currently lack good sensors for recording and communicating the senses.
Liu will discuss his research in a symposium at the annual meeting of the American Association for the Advancement of Science (AAAS) in Washington, D.C., to be held from 1:30 to 4:30 p.m. Saturday, Feb. 19.
For the past 10 years, Liu has led a research group that develops artificial hair cell sensors. Hair cells provide a variety of sensing abilities for different animals: they help humans hear, they help insects detect vibration, and they form the lateral line system that allows fish to sense the flow of water around them.
"The hair cell is interesting because biology uses this same fundamental structure to serve a variety of purposes," Liu says. "This differs from how engineers typically design sensors, which are often used for a specific task."
By creating artificial hair cells using micro- and nanofabrication technology, Liu's group is increasing sensor performance while deepening the understanding of how different creatures use these sensors. For example, every fish in the world uses hair cells in the lateral line as sensors, but so far no manmade vehicle does. If a submarine had sensors similar to those of a fish, it could record much more information on water movement.
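To get a feel for the scales involved, here is a back-of-envelope sketch of such a sensor: flow drag bends a micro-hair cantilever, and the deflection is the read-out. All geometry and material values below are order-of-magnitude assumptions, not Liu's actual design.

```python
# Rough estimate of an artificial hair-cell flow sensor's response.
import math

rho = 1000.0  # water density, kg/m^3
v = 0.05      # flow speed, m/s
Cd = 1.0      # drag coefficient (order-of-magnitude assumption)
L = 500e-6    # hair length, m
d = 50e-6     # hair diameter, m
E = 3e9       # polymer Young's modulus, Pa

A = L * d                        # projected area of the hair
F = 0.5 * rho * Cd * A * v ** 2  # drag force on the hair
I = math.pi * d ** 4 / 64        # second moment of a circular rod
tip = F * L ** 3 / (3 * E * I)   # tip deflection of a cantilever

print(f"drag ~{F*1e9:.1f} nN, tip deflection ~{tip*1e9:.1f} nm")
```

Even a gentle 5 cm/s flow produces a measurable nanometer-scale deflection, which is the kind of signal micro-fabricated read-outs are built to detect.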
Liu's current focus is the medical application of these biologically inspired sensors. He hopes that artificial hair cells could be used to measure acoustics in an artificial cochlea or could be embedded as flow sensors in a wide variety of medical devices.
Liu is also developing new touch sensors to improve minimally invasive surgery techniques. Currently many minimally invasive procedures are conducted through a catheter that is inserted into the body and controlled by a doctor on the outside.
"During a heart treatment, the doctor controlling the catheter has no sense of touch and cannot tell if the catheter has touched the heart wall and successfully completed the therapeutic treatment," Liu explains. "We want to use microfabrication technology to put sensors on the end of the catheter to provide feedback."
In order to achieve his goals, Liu has assembled a multidisciplinary team that includes biologists, engineers, materials scientists and physicians. A mix of fundamental and applied research is necessary to make biologically inspired sensors a reality, he says.
"Using a bio-inspired approach is really important," Chang says. "Nature has a lot of wonderful examples that can challenge us. No matter how good some of our technology is, we still can't do some of the basic things that nature can. Nature holds the secret for the next technology breakthrough and disruptive innovation. We are on a mission to find it."
Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner look on as the black robot deceives the red robot into thinking it is hiding down the left corridor. (Credit: Gary Meek)
A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it’s actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception.
“We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.
The results of robot experiments and theoretical and cognitive deception modeling were published online on September 3 in the International Journal of Social Robotics. Because the researchers explored the phenomenon of robot deception from a general perspective, the study’s results apply to robot-robot and human-robot interactions. This research was funded by the Office of Naval Research.
In the future, robots capable of deception may be valuable for several different areas, including military and search and rescue operations. A search and rescue robot may need to deceive in order to calm or receive cooperation from a panicking victim. Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.
“Most social robots will probably rarely use deception, but it’s still an important tool in the robot’s interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception,” said the study’s co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.
For this study, the researchers focused on the actions, beliefs and communications of a robot attempting to hide from another robot to develop programs that successfully produced deceptive behavior. Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception. Wagner and Arkin used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception — there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
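In outcome-matrix terms, the test reads roughly like the sketch below. The payoff numbers and action names are invented for illustration; the paper grounds this in interdependence theory and game theory rather than in this exact code.

```python
# Hedged sketch of the two-condition deception test: deceive only when
# (1) the agents' preferred outcomes conflict and (2) deception pays.

def deception_warranted(matrix):
    """matrix: {(my_action, their_action): (my_payoff, their_payoff)}."""
    best_for_me = max(matrix.values(), key=lambda o: o[0])
    best_for_them = max(matrix.values(), key=lambda o: o[1])
    conflict = best_for_me != best_for_them  # no mutually best outcome
    gain = matrix[("deceptive", "search")][0] > matrix[("truthful", "search")][0]
    return conflict and gain

# Hide-and-seek: being found costs the hider, a fooled seeker pays off.
game = {
    ("truthful", "search"):  (0, 5),  # honest trail, hider gets caught
    ("deceptive", "search"): (5, 0),  # false trail, seeker misled
}
print(deception_warranted(game))  # True
```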
Once a situation was deemed to warrant deception, the robot carried out a deceptive act by providing a false communication to benefit itself. The technique developed by the Georgia Tech researchers based a robot’s deceptive action selection on its understanding of the individual robot it was attempting to deceive.
To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider’s location to the seeker robot.
“The hider’s set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left,” explained Wagner.
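A toy reconstruction of that trick might look like the following; the corridor names and the naive seeker are assumptions for illustration, not the authors' code.

```python
# The hider advertises one corridor by knocking its markers over,
# then actually hides down a different one.
import random

CORRIDORS = ["left", "middle", "right"]

def choose_deception():
    fake = random.choice(CORRIDORS)  # trail to advertise
    hide = random.choice([c for c in CORRIDORS if c != fake])
    return fake, hide

def seeker_guess(knocked_over):
    return knocked_over  # a naive seeker simply follows the trail

fake, hide = choose_deception()
print(f"markers down in '{fake}', hider in '{hide}',",
      "seeker fooled:", seeker_guess(fake) != hide)
```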
The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot’s inability to knock over the correct markers to produce the desired deceptive communication.
While there may be advantages to creating robots with the capacity for deception, there are also ethical implications that need to be considered to ensure that these creations are consistent with the overall expectations and well-being of society, according to the researchers.
“We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects,” explained Arkin. “We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.”
The BBC reports that a simulated brain experiment is getting closer to real thought. A detailed simulation of a small region of a brain has been constructed molecule by molecule and has recreated experimental results from real brains. The "Blue Brain" has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory. Scaling the simulation to the human brain is only a matter of money, says the project's head. The work was presented this week at the European Future Technologies meeting in Prague.
The next issue of New Scientist will feature an article about smart robots that evolve and grow in real time: unnatural selection, and more disruptive technology on the way. We will see how much of this can be turned into commercial value, instead of just being an academic curiosity.
According to New Scientist:
"Living creatures took millions of years to evolve from amphibians to four-legged mammals - with larger, more complex brains to match. Now an evolving robot has performed a similar trick in hours, thanks to a software "brain" that automatically grows in size and complexity as its physical body develops.
Existing robots cannot usually cope with physical changes - the addition of a sensor or new type of limb, say - without a complete redesign of their control software, which can be time-consuming and expensive.
So artificial intelligence engineer Christopher MacLeod and his colleagues at the Robert Gordon University in Aberdeen, UK, created a robot that adapts to such changes by mimicking biological evolution. "If we want to make really complex humanoid robots with ever more sensors and more complex behaviours, it is critical that they are able to grow in complexity over time - just like biological creatures did," he says."
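The generic shape of the idea: run an ordinary evolutionary loop, and when fitness plateaus, grow the network instead of redesigning it. The sketch below is a bare-bones incremental neuroevolution loop of my own devising, not MacLeod's actual algorithm.

```python
# Evolve a population of weight vectors; when progress stalls, every
# genome grows a new gene ("neuron") and evolution continues from there.
import random

def evolve(fitness, genome_size=4, pop=20, gens=200, patience=15):
    population = [[random.uniform(-1, 1) for _ in range(genome_size)]
                  for _ in range(pop)]
    best, stall = float("-inf"), 0
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        top = fitness(population[0])
        if top > best:
            best, stall = top, 0
        else:
            stall += 1
        if stall >= patience:             # plateau: grow the brain
            for genome in population:
                genome.append(random.uniform(-1, 1))
            stall = 0
        parents = population[: pop // 2]  # truncation selection
        population = parents + [
            [w + random.gauss(0, 0.1) for w in random.choice(parents)]
            for _ in range(pop - len(parents))
        ]
    return max(population, key=fitness)

# e.g. pull every weight toward 0.5, whatever size the genome has grown to
best = evolve(lambda g: -sum((w - 0.5) ** 2 for w in g) / len(g))
print(len(best), "genes in the best genome")
```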
Expert, Consultant and Keynote Speaker on Emerging Smart Technologies, Innovation, Strategic Foresight, Business Development, Lateral Creative Thinking and author of an upcoming book on the Smart Economy
The Smart Technology Blog: The Smart Economy -- Read, enjoy, explore, speculate, comment !!
To arrange for an in-house presentation or briefing on smart technology, see here
A while back when I was at the Idea Lab at the Design Exchange, I remember sitting over a coffee and exploring a thought exercise with some clients.
The question under debate was: what are the generic elements or activities that would accelerate any new technology platform (such as smart technologies) and overcome the initial market inertia? Simply put, is there a generic technology evolution matrix?
One was having cartoonists make fun of your technology (no, smart technologies aren't on Dilbert's radar screen yet, but any day now, I hope). Another key event would be having an influential CEO highlight the benefits of your technology in a milestone keynote speech (acting as a complementor, in business-strategy terms).
Well, for smart technologies, that breakthrough event may have happened this month, on November 6th, when IBM’s CEO, Sam Palmisano, outlined a new agenda for building a smarter planet during a speech at the Council on Foreign Relations. (Hat tip to Adam Christensen at IBM HQ.)
From IBM's blog:
"In the speech, he outlines a number of the challenges faced today by people, governments, businesses and organizations. A lack of clean water for a fifth of the world’s population. Energy systems that waste more energy than they produce. Traffic in our cities that clogs roads and chokes economic growth.
Clearly there are no simple solutions for these problems.
Technology can play a big role in helping find answers to these problems. While the Internet currently connects more than a billion people, in just a few years, it will connect more than a trillion objects. Everything from cell phones, cars, roads, buildings, and even objects in nature itself, will have embedded technology and be connected to one another, enabling tremendous advances in how we understand how the world works and make smarter decisions to make it work better.
But technology is just part of the solution. Without the people, policies and culture to inspire and execute the change, nothing ultimately gets done."
From Sam’s speech:
[...] These collective realizations have reminded us that we are all now connected—economically, technically and socially. But we're also learning that being connected is not sufficient. Yes, the world continues to get "flatter." And yes, it continues to get smaller and more interconnected. But something is happening that holds even greater potential. In a word, our planet is becoming smarter.
This isn't just a metaphor. I mean infusing intelligence into the way the world literally works—the systems and processes that enable physical goods to be developed, manufactured, bought and sold… services to be delivered… everything from people and money to oil, water and electrons to move… and billions of people to work and live.
What's making this possible?
First, our world is becoming instrumented: The transistor, invented 60 years ago, is the basic building block of the digital age. Now, consider a world in which there are a billion transistors per human, each one costing one ten-millionth of a cent. We'll have that by 2010. There will likely be 4 billion mobile phone subscribers by the end of this year… and 30 billion Radio Frequency Identification tags produced globally within two years. Sensors are being embedded across entire ecosystems—supply-chains, healthcare networks, cities… even natural systems like rivers.
Second, our world is becoming interconnected: Very soon there will be 2 billion people on the Internet. But in an instrumented world, systems and objects can now "speak" to one another, too. Think about the prospect of a trillion connected and intelligent things—cars, appliances, cameras, roadways, pipelines… even pharmaceuticals and livestock. The amount of information produced by the interaction of all those things will be unprecedented.
Third, all things are becoming intelligent: New computing models can handle the proliferation of end-user devices, sensors and actuators and connect them with back-end systems. Combined with advanced analytics, those supercomputers can turn mountains of data into intelligence that can be translated into action, making our systems, processes and infrastructures more efficient, more productive and responsive—in a word, smarter.
What this means is that the digital and physical infrastructures of the world are converging. Computational power is being put into things we wouldn't recognize as computers. Indeed, almost anything—any person, any object, any process or any service, for any organization, large or small—can become digitally aware and networked.
With so much technology and networking abundantly available at such low cost, what wouldn't you enhance? What service wouldn't you provide a customer, citizen, student or patient? What wouldn't you connect? What information wouldn't you mine for insight?
The answer is, you or your competitor—another company, or another city or nation—will do all of that. You will do it because you can—the technology is available and affordable.
But there is another reason we will make our companies, institutions and industries smarter. Because we must. Not just at moments of widespread shock, but integrated into our day-to-day operations. These mundane processes of business, government and life—which are ultimately the source of those "surprising" crises—are not smart enough to be sustainable.
Leaders will need to hone their collaboration skills, because we will need leadership that pulls across systems. We will need to bring together stakeholders and experts from across business, government and academia, and all of them will need to move outside their traditional comfort zones.
I’m struck by the questions this raises. What investments need to be made by both public and private institutions? What policy issues need to be debated and resolved? What role can individual citizens and employees play in helping bring about meaningful change?
I’m also struck by the potential opportunities inherent in finding solutions to these problems.
FOR RELEASE: Tuesday, August 5, 2008
University of Arkansas Researchers Combine Technologies to Heal Patients, Virtually
University of Arkansas researchers seeking new ways to make health care more efficient and cost-effective have built a new kind of hospital: one that uses location-aware systems, sensors, smart devices, radio-frequency identification and virtual reality.
Anyone can visit this hospital, dubbed the Razorback Hospital by some, on the University of Arkansas island in Second Life, a free online 3-D virtual world.
This spring and summer about 40 university students, working with six high school students from the Environmental and Spatial Technology Initiative (EAST), worked with professors Craig Thompson and Fran Hagstrom to create the virtual hospital and supply chain in Second Life. They will offer a public demonstration and discussion of this world at noon on Thursday, Aug. 7, at the Northwest Arkansas EAST Training Center in Room 263 of the J.B. Hunt Transport Services Inc. Center for Academic Excellence.
The students have visited local hospitals to understand what needs exist. They have created a building with patient rooms, intensive care, a diagnostics suite, a pharmacy and supply rooms. Because limits in Second Life only exist in the imagination, professors and students can create things that they have reason to believe will exist soon in the real world, then interact with those items to see how they work.
“Students in my spring artificial intelligence class developed ‘smart’ pill bottles that only the owner can open and that know their pill count, smart shelves that know when to re-order, a restocking robot, wheelchairs that can follow way points and virtual RFID readers and tags,” said Thompson, a professor in the computer science and computer engineering department and holder of the Acxiom Database Chair in Engineering.
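As a flavor of what those objects do, a smart shelf reduces to a few lines of logic once every item carries an RFID tag. The class below is a hedged sketch with invented names and thresholds, not the students' Second Life scripts.

```python
# A shelf that counts the RFID tags in read range and re-orders
# when stock falls to the re-order point.
class SmartShelf:
    def __init__(self, item, reorder_point, reorder_qty):
        self.item = item
        self.reorder_point = reorder_point
        self.reorder_qty = reorder_qty
        self.tags = set()  # RFID tags currently visible to the reader

    def scan(self, visible_tags):
        """Refresh the inventory count; return a re-order request if low."""
        self.tags = set(visible_tags)
        if len(self.tags) <= self.reorder_point:
            return f"reorder {self.reorder_qty} x {self.item}"
        return None

shelf = SmartShelf("gauze packs", reorder_point=5, reorder_qty=50)
print(shelf.scan({f"tag-{i:03d}" for i in range(4)}))  # 4 left -> re-order
```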
The students also had to create something that most avatars, or the virtual beings within Second Life, lack – internal organs. Now the virtual doctors in the hospital can perform virtual organ transplant operations.
“We feel there is huge potential here – well beyond health care or the groups we have touched so far,” said Adam Barnes, a staff member on the project. “The project is really about the future world we will all live in – where every object is a network object and humans can communicate with things as well as they do with each other.”
“This program cuts across the boundaries of departments and colleges, with participants and ideas from the colleges of engineering, business, education and arts and sciences,” said Malcolm Williamson, a research associate with the Center for Advanced Spatial Technologies, who works with the project. “This is a perfect example of what the J.B. Hunt Transportation Center for Excellence was designed for.” Partners in the project include the Center for Innovation in Healthcare Logistics, the RFID Research Center, the Center for Advanced Spatial Technologies, the Arkansas Science and Technology Authority and the EAST Initiative.
EAST is a performance-based learning environment for high school students that combines problem-based service learning and advanced technological applications to develop solutions to everyday challenges. There are more than 170 EAST classrooms in Arkansas. In total, there are 193 EAST classrooms in seven states, including Hawaii, Oklahoma, Illinois and California. Please visit www.EASTinitiative.org for more information on EAST.
Expert, Consultant, Keynote Speaker and Lecturer on Emerging Smart Technologies, Innovation, Entrepreneurship, Strategic Foresight, Business Development, Lateral Creative Thinking and author of an upcoming book on the Smart Economy
The Smart Technology Blog: The Smart Economy -- Read, enjoy, explore, speculate, comment !!
To arrange for an in-house presentation or briefing on smart technology, see here
The annual IEEE International Conference on Tools with Artificial Intelligence (ICTAI) provides a major international forum where the creation and exchange of ideas related to artificial intelligence are fostered among academia, industry, and government agencies. The conference facilitates the cross-fertilization of these ideas and promotes their transfer into practical tools for developing intelligent systems and pursuing artificial intelligence applications. ICTAI encompasses all the technical aspects of specifying, developing and evaluating the theoretical underpinnings and applied mechanisms of the AI-based components of computer tools (i.e., algorithms, methodologies, architectures, and languages).
TOPICS OF INTEREST include (but are not limited to) the following:
Submissions should contain original, high-quality work not submitted or published elsewhere. Accepted papers will be included in the ICTAI 2008 proceedings, published by the IEEE Computer Society. A selection of the best papers of the conference will be published in a special issue of the International Journal on Artificial Intelligence Tools (IJAIT), which is SCI-indexed.
Paper submission: June 30, 2008 (extended)
Notification of acceptance: July 23, 2008
Camera-ready paper: August 15, 2008
Nikolaos Bourbakis, Wright State Univ, USA (Chair)
C. V. Ramamoorthy, UC at Berkeley, USA
Farokh Bastani, Univ of Texas at Dallas, USA
Jeffrey Tsai, Univ of Illinois at Chicago, USA
Benjamin Wah, Univ of Illinois at Urbana-Champaign, USA