Tag Archives: Computing

DNA May Be the Next Frontier of Computing and Data Storage

GUEST POST from Greg Satell

Data, as many have noted, has become the new oil, meaning that we no longer regard the information we store as merely a cost of doing business, but as a valuable asset and a potential source of competitive advantage. It has become the fuel that powers advanced technologies such as machine learning.

A problem that’s emerging, however, is that our ability to produce data is outstripping our ability to store it. In fact, an article in the journal Nature predicts that by 2040 data storage will consume 10–100 times the expected supply of microchip-grade silicon using current technology. Clearly, we need a data storage breakthrough.

One potential solution is DNA, which is a million times more information dense than today’s flash drives. It is also more stable, more secure and uses minimal energy. The problem is that it is currently prohibitively expensive. However, a startup that has emerged out of MIT, called CATALOG, may have found the breakthrough we’re looking for: low-cost DNA storage.

The Makings Of A Scientist-Entrepreneur

Growing up in his native Korea, Hyunjun Park never planned on a career in business, much less the technology business, but expected to become a biologist. He graduated with honors from Seoul National University and then went on to earn a PhD from the University of Wisconsin. Later he joined Tim Lu’s lab at MIT, which specializes in synthetic biology.

In an earlier time, he would have followed an established career path, from PhD to post-doc to assistant professor to tenure. These days, however, there is a growing trend for graduate students to get an entrepreneurial education in parallel with the traditional scientific curriculum. Park, for example, participated in both the Wisconsin Entrepreneurial Bootcamp and Start MIT.

He also met a kindred spirit in Nate Roquet, a PhD candidate who, about to finish his thesis, had started thinking about what to do next. Inspired by a talk given by the Chief Science Officer of the seed fund IndieBio, the two began to talk in earnest about starting a company together based on their work in synthetic biology.

As they batted around ideas, the subject of DNA storage came up. By this time, the advantages of the technology were well known, but it was not considered practical, costing hundreds of thousands of dollars to store just a few hundred megabytes of data. However, the two did some back-of-the-envelope calculations and became convinced they could do it far more cheaply.

Moving From Idea To Product

The basic concept of DNA storage is simple. Essentially, you just encode the ones and zeros of digital code into the T, G, A and C’s of genetic code. However, stringing those genetic molecules together is tedious and expensive. The idea that Park and Roquet came up with was to use enzymes to alter strands of DNA, rather than building them up piece by piece.
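To make the mapping concrete, below is a minimal sketch in Python of the textbook two-bits-per-base scheme (00 to A, 01 to C, 10 to G, 11 to T). It is an illustrative assumption about one simple encoding, not CATALOG’s actual enzyme-based method, which assembles pre-made strands rather than writing bases one by one.

```python
# A minimal, assumed two-bits-per-base encoding, for illustration only.
# CATALOG's enzymatic approach works differently; this just shows the mapping idea.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn every pair of bits in the input into one DNA base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases become one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    strand = encode(b"DNA")
    print(strand)                    # CACACATGCAAC (12 bases for 3 bytes)
    assert decode(strand) == b"DNA"
```

Real systems layer error correction on top and avoid long runs of the same base, but the core translation between bit pairs and bases is as simple as this.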

Contrary to popular opinion, most traditional venture capital firms, such as those that populate Sand Hill Road in Silicon Valley, don’t invest in ideas. They invest in products. IndieBio, however, isn’t your typical investor. They give only a small amount of seed capital, but offer other services, such as wet labs, entrepreneurial training and scientific mentorship. Park and Roquet reached out to them and found some interest.

“We invest in problems, not necessarily solutions,” Arvind Gupta, founder of IndieBio, told me. “Here the problem is massive. How do you keep the world’s knowledge safe? We know DNA can last thousands of years and can be replicated very inexpensively. That’s a really big deal and Hyunjun and Nate’s approach was incredibly exciting.”

Once the pair entered IndieBio’s four-month program, they found both promise and disappointment. Their approach could dramatically reduce the cost of storing information in DNA, but not nearly quickly enough to build a commercially viable product. They would need to pivot if they were going to turn their idea into an actual business.

Scaling To Market

One flaw in CATALOG’s approach was that the process was too complex to scale. Yet they found that by starting with just a few different DNA strands and attaching them together, much like a printing press pre-arranges words in a book, they could come up with something that was not only scalable, but commercially viable from a cost perspective.

The second problem was more thorny. Working with enzymes is incredibly labor intensive and, being biologists, Park and Roquet didn’t have the mechanical engineering expertise to make their process feasible. Fortunately, an advisor, Darren Link, connected the pair to Cambridge Consultants, an innovation consultancy that could help them.

“We started looking at the problem and it seemed that, on paper at least, we could make it work,” Richard Hammond, Technology Director and Head of Synthetic Biology at Cambridge Consultants, told me. “Now we’re about halfway through making the first prototype and we believe we can make it work and scale it significantly. We’re increasingly confident that we can solve the core technical challenges.”

In 2018 CATALOG introduced the world to Shannon, its prototype DNA writer, and in 2022 it announced its DNA computation work at the HPC User Forum. But CATALOG isn’t without competition in the space. For example, Western Digital’s LTO-9, as of 2022, can store 18 TB per cartridge. CATALOG, for its part, is partnering with Seagate “on several initiatives to advance scalable and automated DNA-based storage and computation platforms, including making DNA-based platforms up to 1000 times smaller.” That should make the process competitive for archival storage, such as medical and legal records, as well as storing film databases at movie studios.

“I think the fact that we’re inventing a completely new medium for data storage is really exciting,” Park told me. “I don’t think that we know yet what the true potential is because the biggest use cases probably don’t exist yet. What I do know is that our demand for data storage will soon outstrip our supply and we are thrilled about the possibility of solving that problem.”

Going Beyond Digital

A generation ago, the task of improving data storage would have been seen as solely a computer science problem. Yet today, the digital era is ending and we’re going to have to look further and wider for solutions to the problems we face. With the vast improvement in genomics, which is far outpacing Moore’s law these days, we can expect biology to increasingly play a role.

“Traditionally, information technology has been strictly the realm of electrical engineers, physicists and coders,” Gupta of IndieBio told me. “What we’re increasingly finding is that biology, which has been honed for millions of years by evolution, can often point the way to solutions that are more robust and, potentially, much cheaper and more efficient.”

Yet this phenomenon goes far beyond biology. We’re also seeing similar accelerations in other fields, such as materials science and space-related technologies. We’re also seeing a new breed of investors, like IndieBio, that focus specifically on scientist entrepreneurs. “I consider myself a product of the growing ecosystem for scientific entrepreneurs at universities and in the investor community,” Park told me.

Make no mistake. We are entering a new era of innovation and the traditional Silicon Valley approach will not get us where we need to go. Instead, we need to forge greater collaboration between the scientific community, the investor community and government agencies to solve problems that are increasingly complex and interdisciplinary.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Bringing Yin and Yang to the Productivity Zone

GUEST POST from Geoffrey A. Moore

Digital transformation is hardly new. Advances in computing create more powerful infrastructure, which in turn enables more productive operating models, which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the Worldwide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked both to keep the current systems running and to enable the enterprise to catch the next wave. And that’s a problem.

The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.

It is a rare IT organization that does not know how to maintain its own systems. That’s Job 1, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs—specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.

Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot—indeed, must not—commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you—they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.

Given all that, here’s the playbook to follow:

  1. If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
  2. To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
  3. The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
  4. Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.

Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.

That’s what I think. What do you think?

Image Credit: Unsplash


A Brave Post-Coronavirus New World

GUEST POST from Greg Satell

In 1973, in the wake of the Arab defeat in the Yom Kippur War with Israel, OPEC instituted an oil embargo on America and its allies. The immediate effects of the crisis were a surge in gas prices and a recession in the West. The ripple effects, however, were far more complex and played out over decades.

The rise in oil prices brought much needed hard currency to the Soviet Union, prolonging its existence and setting the stage for its later demise. The American auto industry, with its passion for big, gas-guzzling cars, lost ground to emerging Japanese competitors and their smaller, more fuel-efficient models. The new consciousness of conservation led to the establishment of the Department of Energy.

Today the Covid-19 crisis has given a shock to the system and we’re at a similar inflection point. The most immediate effects have been economic recession and the rapid adoption of digital tools, such as video conferencing. Over the next decade or so, however, the short-term impacts will combine with other more longstanding trends to reshape technology and society.

Pervasive Transformation

We tend to think about innovation as if it were a single event, but the truth is that it’s a process of discovery, engineering and transformation, which takes decades to run its course. For example, Alan Turing discovered the principles of a universal computer in 1936, but it wasn’t until the 1950s and 60s that digital computers became commercially available.

Even then, digital technology didn’t really begin to become truly transformational until the mid-90s. By this time, it was well enough understood to make the leap from highly integrated systems to modular ecosystems, making the technology cheaper, more functional and more reliable. The number of applications exploded and the market grew quickly.

Still, as the Covid-19 crisis has made clear, we’ve really just been scratching the surface. Although digital technology certainly accelerated the pace of work, it did fairly little to fundamentally change the nature of it. People still commuted to work in an office, where they would attend meetings in person, losing hours of productive time each and every day.

Over the next decade, we will see pervasive transformation. As Mark Zuckerberg has pointed out, once people can work remotely, they can work from anywhere, which will change the nature of cities. Instead of “offsite” meetings, we may very well have “onsite” meetings where people travel from their home cities to headquarters to do more active collaboration.

These trends will combine with nascent technologies like artificial intelligence and blockchain to revolutionize business processes and supply chains. Organizations that cannot adopt key technologies will very likely find themselves unable to compete.

The Rise of Heterogeneous Computing

The digital age did not begin with personal computers in the 70s and 80s, but started back in the 1950s with the shift from electromechanical calculating machines to transistor-based mainframes. However, because so few people used computers back then—they were largely relegated to obscure back office tasks and complex scientific calculations—the transformation took place largely out of public view.

A similar process is taking place today with new architectures such as quantum and neuromorphic computing. While these technologies are not yet commercially viable, they are advancing quickly and will eventually become thousands, if not millions, of times more effective than digital systems.

However, what’s most important to understand is that they are fundamentally different from digital computers and from each other. Quantum computers will create incredibly large computing spaces that will handle unimaginable complexity. Neuromorphic systems, based on the human brain, will be massively powerful, vastly more efficient and more responsive.

Over the next decade we’ll be shifting to a heterogeneous computing environment, where we use different architectures for different tasks. Most likely, we’ll still use digital technology as an interface to access systems, but increasingly performance will be driven by more advanced architectures.
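In software terms, that likely means a thin digital dispatch layer sitting in front of specialized back ends. Here is a purely hypothetical sketch of the idea in Python; the class, the backend names and the job format are all assumptions for illustration, not any real product’s API.

```python
# Hypothetical sketch: a conventional digital layer that routes jobs to
# specialized back ends (quantum, neuromorphic, classical, ...).
from typing import Callable, Dict

class HeterogeneousRuntime:
    def __init__(self) -> None:
        self.backends: Dict[str, Callable[[dict], dict]] = {}

    def register(self, kind: str, runner: Callable[[dict], dict]) -> None:
        """Register a back end, e.g. 'quantum' or 'neuromorphic'."""
        self.backends[kind] = runner

    def run(self, kind: str, job: dict) -> dict:
        """The digital front end only dispatches; the back end does the heavy lifting."""
        if kind not in self.backends:
            raise ValueError(f"no backend registered for {kind!r}")
        return self.backends[kind](job)

runtime = HeterogeneousRuntime()
runtime.register("classical", lambda job: {"result": sum(job["values"])})
print(runtime.run("classical", {"values": [1, 2, 3]}))  # {'result': 6}
```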

A Shift From Bits to Atoms

The digital revolution created a virtual world. My generation was the first to grow up with video games, and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created VisiCalc, the first spreadsheet program. Eventually smartphones and social media appeared and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to quickly and cheaply design products in bits before we got down to the hard, slow work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy more efficient.

Today we’re doing similar things at the molecular level. For example, digital technology was combined with synthetic biology to quickly sequence the Covid-19 virus. These same technologies then allowed scientists to design vaccines in days and to bring them to market in less than a year.

A parallel revolution is taking place in materials science, while at the same time digital technology is beginning to revolutionize traditional industries such as manufacturing and agriculture. The expanded capabilities of heterogeneous computing will accelerate these trends over the next few decades.

What’s important to understand is that we spend vastly more money on atoms than bits. Even at this advanced stage, information technologies only make up about 6% of GDP in advanced economies. Clearly, there is a lot more opportunity in the other 94%, so the potential of the post-digital world is likely to far outstrip anything we’ve seen in our lifetimes.

Collaboration is the New Competitive Advantage

Whenever I think back to when we got that first computer back in the 1980s, I marvel at how different the world was then. We didn’t have email or mobile phones, so unless someone was at home or in the office, they were largely unreachable. Without GPS, we had to either remember where things were or ask for directions.

These technologies have clearly changed our lives dramatically, but they were also fairly simple. Email, mobile and GPS were largely standalone technologies. There were, of course, technical challenges, but these were relatively narrow. The “killer apps” of the post-digital era will require a much higher degree of collaboration over a much more diverse set of skills.

To understand how different this new era of innovation will be, consider how IBM developed the PC. Essentially, it sent some talented engineers to Boca Raton for a year and, in that time, they developed a marketable product. For quantum computing, however, IBM is building a vast network, including national labs, research universities, startups and industrial partners.

The same will be true of the post-Covid world. It’s no accident that Zoom has become the killer app of the pandemic. The truth is that the challenges we will face over the next decade will be far too complex for any one organization to tackle alone. That’s why collaboration is becoming the new competitive advantage. Power will reside not at the top of hierarchies, but at the center of networks and ecosystems.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


A New Age Of Innovation and Our Next Steps

GUEST POST from Greg Satell

In Mapping Innovation, I wrote that innovation is never a single event, but a process of discovery, engineering and transformation and that those three things hardly ever happen at the same time or in the same place. Clearly, the Covid-19 pandemic marked an inflection point which demarcated several important shifts in those phases.

Digital technology showed itself to be transformative, as we descended into quarantine and found an entire world of video conferencing and other technologies that we scarcely knew existed. At the same time it was revealed that the engineering of synthetic biology—and mRNA technology in particular—was more advanced than we had thought.

This is just the beginning. I titled the last chapter of my book “A New Era of Innovation,” because it had become clear that we had begun to cross a new Rubicon, in which digital technology becomes so ordinary and mundane that it’s hard to remember what life was like without it, while new possibilities alter existence to such an extent that we will scarcely believe it.

Post-Digital Architectures

For the past 50 years, the computer industry—and information technology in general—has been driven by the principle known as Moore’s Law, which held that the number of transistors on a chip would double roughly every 18 months. Yet now Moore’s Law is ending, and that means we will have to revisit some very basic assumptions about how technology works.
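As a rough worked example of what that cadence implies, taking the idealized 18-month doubling period at face value, ten doublings over 15 years is roughly a 1,000-fold increase in transistor count, and 50 years compounds to something on the order of ten billion.

```python
# Back-of-the-envelope compounding under an idealized 18-month doubling period.
# This is arithmetic for illustration, not a claim about any specific chip.
def transistor_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth factor in transistor count after `years` of idealized doubling."""
    return 2 ** (years / doubling_period_years)

print(round(transistor_multiple(15)))    # 1024: ten doublings in 15 years
print(f"{transistor_multiple(50):.2e}")  # ~1.08e+10 over the 50 years cited above
```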

To be clear, the end of Moore’s Law does not mean the end of advancement. There are a number of ways we can speed up computing. We can, for instance, use technologies such as ASICs and FPGAs to optimize chips for specialized tasks. Still, those approaches come with tradeoffs; Moore’s Law essentially gave us innovation for free.

Another way out of the Moore’s Law conundrum is to shift to completely new architectures, such as quantum, neuromorphic and, possibly, biological computers. Yet here again, the transition will not be seamless or without tradeoffs. Instead of technology based on transistors, we will have multiple architectures based on entirely different logical principles.

So it seems that we will soon be entering a new era of heterogeneous computing, in which we use digital technology to access different technologies suited to different tasks. Each of these technologies will require very different programming languages and algorithmic approaches and, most likely, different teams of specialists to work on them.

What that means is that those who run the IT operations in the future, whether that person is a vaunted CTO or a lowly IT manager, will be unlikely to understand more than a small part of the system. They will have to rely heavily on the expertise of others to an extent that isn’t required today.

Bits Driving Atoms

While the digital revolution does appear to be slowing down, computers have taken on a new role in helping to empower technologies in other fields, such as synthetic biology, materials science and manufacturing 4.0. These, unlike so many digital technologies, are rooted in the physical world and may have the potential to be far more impactful.

Consider the revolutionary mRNA technology, which not only empowered us to develop a Covid vaccine in record time and save the planet from a deadly pandemic, but also makes it possible to design new vaccines in a matter of hours. There is no way we could achieve this without powerful computers driving the process.

There is similar potential in materials discovery. Suffice it to say, every product we use, whether it is a car, a house, a solar panel or whatever, depends on the properties of materials to perform its function. Some need to be strong and light, while others need special electrical properties. Powerful computers and machine learning algorithms can vastly improve our ability to discover better materials (not to mention overcome supply chain disruptions).

Make no mistake, this new era of innovation will be one of atoms, not bits. The challenge we face now is to develop computer scientists who can work effectively with biologists, chemists, factory managers and experts of all kinds to truly create a new future.

Creation And Destruction

The term creative destruction has become so ingrained in our culture that we scarcely stop to think where it came from. It was largely coined by economist Joseph Schumpeter to overcome what many saw as an essential “contradiction” of capitalism. Essentially, some thought that if capitalists did their jobs well, there would be increasing surplus value, which would then be appropriated to accumulate power and rig the system further in the capitalists’ favor.

Schumpeter pointed out that this wasn’t necessarily true because of technological innovation. Railroads, for example, completely changed the contours of competition in the American Midwest. Surely, there had been unfair competition in many cities and towns, but once the railroad came to town, competition flourished (and if it didn’t come, the town died).

For most of history since the beginning of the Industrial Revolution, this has been a happy story. Technological innovation displaced businesses and workers, but resulted in increased productivity which led to more prosperity and entirely new industries. This cycle of creation and destruction has, for the most part, been a virtuous one.

That is, until fairly recently. Digital technology, despite the hype, hasn’t produced the type of productivity gains that earlier technologies, such as electricity and internal combustion, did, but it has actually displaced labor at a faster rate. Put simply, the productivity gains from digital technology are too meager to finance enough new industries with better jobs, which has created income inequality rather than greater prosperity.

We Need To Move From Disrupting Markets To Tackling Grand Challenges

There’s no doubt that digital technology has been highly disruptive. In industry after industry, from retail to media to travel and hospitality, nimble digital upstarts have set established industries on their head, completely changing the basis upon which firms compete. Many incumbents haven’t survived. Many others are greatly diminished.

Still, in many ways, the digital revolution has been a huge disappointment. Besides the meager productivity gains, we’ve seen a global rise in authoritarian populism, stagnant wages, reduced productivity growth and weaker competitive markets, not to mention an anxiety epidemic, increased obesity and, at least in the US, decreased life expectancy.

We can—and must—do better. We can learn from the mistakes we made during the digital revolution and shift our mindset from disrupting markets to tackling grand challenges. This new era of innovation will give us the ability to shape the world around us like never before, at a molecular level, and achieve incredible things.

Yet we can’t just leave our destiny to the whims of market and technological forces. We must actually choose the outcomes we prefer and build strategies to achieve them. The possibilities that we will unlock from new computing architectures, synthetic biology, advanced materials science, artificial intelligence and other things will give us that power.

What we do with it is up to us.

Image credit: Pixabay


Rise of Seamless Computing

Some people have made fun of the fact that I said the iPad might fail when it was announced, but I just looked back at what I wrote in 2010 (before Apple fixed their Value Translation problem) and I stand by what I said in that article. Then I looked further back to what I wrote in 2009 about my vision for the future evolution of computing, a concept I call Seamless Computing.

I also just looked up the iPad sales data (note that this chart is missing the first quarter’s sales data and that Q1 is the Christmas quarter). You’ll notice that it did in fact take about two years for iPad sales to really take off (my prediction). When I highlight that this was BEFORE they fixed their value translation problem, I mean that the article was written when most people were calling the iPad a giant iPhone, and before Apple came out with the out-of-home (OOH) advertising showing somebody leaning back on a couch with the iPad on their lap. This single image fixed their perception problem, and these billboards came out as the product was starting to ship (a full three months after the product was announced). You’ll also notice in the chart, if you follow the link above, that the iPad has already peaked and is on the decline.

Unfortunately for Apple, the iPod is past its peak, now the iPad is past its peak, and the iPhone 6 will at some point represent the peak of their mobile phone sales as replacement cycles start to lengthen and lower-priced smartphones become good enough for most people. Apple will likely continue to win in the luxury smartphone market, but the non-luxury smartphone market will be where the growth is (not Apple’s strength).

Now, moving on from Apple, what is interesting is that for the past couple of years we’ve been obsessed with smartphones and cloud computing, but it is looking more and more like the timing is now right for Seamless Computing to become the next battleground.

Cloud Computing won’t die or go away as Seamless Computing takes hold, but the cloud will become less sexy and more just part of the plumbing necessary to make Seamless Computing work.

Who will the winners in Seamless Computing be?

In 2009 I laid out my first ideas about what Seamless Computing might look like:

People’s behavior is changing. As people move to smartphones like the Apple iPhone, these devices are occupying the middle space (around the neighborhood), and the mobility of laptops is shifting to the edges – around the house and around the world.

Personally I believe that as smartphones and cloud computing evolve, these devices will become our primary computing hub and new hardware will be introduced that connects physically, wirelessly or virtually to enhance storage, computing power, screen size, input needs, output needs, etc.

– This would be thinking differently.
– This would be more than introducing a ‘me-too, but a little better’ product.
– This would be innovation.

Then I expanded upon this in 2010 by laying out the following computing scenario:

What would be most valuable for people, what they really want, is an extensible, pocketable device that connects wirelessly to whatever input or output devices they might need to fit the context of what they want to do. To keep it simple and Apple-specific, in one pocket you’ve got your iPhone, and in your other pocket you’ve got a larger screen with limited intelligence that folds in half, connects to your iPhone, and can also transmit touch and gesture input for those times when you want a bigger screen. When you get to work you put your iPhone on the desk and it connects to your monitor, keyboard, and possibly even an auxiliary storage and processing unit to augment the iPhone’s onboard capabilities. Ooops! Time for a meeting, so I grab my iPhone, get to the conference room and wirelessly connect my iPhone to the in-room projector and do my presentation. On the bus home I can watch a movie or read a book, and when I get home I can connect my iPhone to the television and download a movie or watch something from my TV subscriptions. So why do I need to spend $800 for a fourth screen again?

Now, along comes a company called Neptune that is building a prototype of a computing scenario similar to one that I laid out in 2009 and is raising funds on IndieGogo to make it a reality. The main difference is that I had the smartphone as the hub, where they have a smartwatch as their hub. My biggest concern about making the smartwatch the hub would be battery life. Here is a video showing their vision:

But Neptune isn’t alone in pushing computing forward towards Seamless Computing. Microsoft is starting to lay the foundation for this kind of computing with Windows 10. The wireless carriers are investing in increasing their ability to make successful session handoffs between 4G LTE and WiFi without dropping calls or data sessions, and Neptune, Intel and others have created wireless protocols that allow a smart device to send video output to other devices.

Will Seamless Computing be a reality soon?

And if so, how long do you think it will take before it becomes commonplace?

My bet is on 2-3 years, meaning that Neptune may be too early, unless they do an amazing job at all three pillars of successful innovation:

  1. Value Creation
  2. Value Access
  3. Value Translation

Keep innovating!

Image source: Wired



The Napkin PC and Other Innovative Ideas

I came across the website for a Microsoft-sponsored alternative computing form factor contest a few years ago, and even now I must say there were a few interesting ideas that might help people begin to see the future of computing.

The most interesting concept was coincidentally the winner of the contest, the Napkin PC.

If you follow the link above you’ll see the artist’s conceptions and get a good sense of the vision. The gist is that some of the greatest advances in the world have been conceived on the lowly paper napkin in restaurants and coffee shops all over the world, so why not take the napkin high-tech? Just don’t try to wipe up spilled coffee with it.

The concept consists of a rack to contain and potentially recharge the OLED “napkins” and the styluses that go with them. These “napkins” provide a computing interface much like a tablet computer and can be pinned up on a board or connected together to make a larger display.

The concept is targeted squarely at the brainstorming, ideation, collaboration space and if the designer can ever manage to pull it off, I think it would be a welcome tool for organizations everywhere.

So what is your vision for the future of computing?

Are there other sites on this topic you think others would find interesting?
— If so, please add a comment to this article with the URL
