Heart Halo is now in the App Store!

Our team has spent the last year developing a wearable application called the Heart Halo with scientists and physicians at the Mayo Clinic (you can read about it here). After months of beta testing with users, the app is now available for Apple Watch (but lives on your iPhone) on the App Store.

This initial release is a pre-productized version, meaning a major UI/UX design push is still to come. But engagement from beta testers was so strong that we decided to release this version to gather a broader set of responses before finalizing and pricing it for consumer use.

This review from one of our testers sums up the opportunity we saw to bring a killer app to the wearables market, which has yet to give users a way to understand and act on their heart rate monitors:

“To be honest the halo app is the only way to make sense of my Apple Watch data. Otherwise walking around counts as calories and active minutes while the halo app motivates me to push myself to work out and elevate my heart rate.”

If you have an Apple Watch and want to give the free version a try, download it from here.

Chasing the Killer App for Wearables

The new generation of heart rate (HR) wearables has pushed the fitness tech sector closer to a medical-grade enterprise. But for the average person, without personalized physician interpretation and directives, the data is almost meaningless. This presents a huge opportunity for what I believe is the near-mythical killer app for wearables.

It's true that the bio-sensors in many of the lower-end wearable devices are sub-par. But if you're willing to spend $200+ on a Fitbit, Garmin, or Apple Watch, you are acquiring a much higher-fidelity picture of overall health than the standard Steps-based analytics.

In fact, there's a pretty common argument (led by the American Heart Association) against the value of a Steps-only fitness metric, given that walking is not strenuous. More critically, overall heart health and the prevention of heart disease really depend on the measure of Intensity during exercise, which means habitually pushing your heart rate up to physician-prescribed thresholds for your age, gender, and body type.
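
To give a feel for the threshold math, here is a minimal sketch using the widely cited 220-minus-age estimate of maximum heart rate and the AHA's published moderate (50-70%) and vigorous (70-85%) intensity bands. These are generic approximations, not the personalized, physician-prescribed thresholds discussed here:

```swift
// Generic approximation only: max HR ≈ 220 − age, with AHA intensity bands.
// Personalized prescriptions from a physician will differ.
func targetHeartRateZones(age: Int) -> (moderate: ClosedRange<Double>, vigorous: ClosedRange<Double>) {
    let maxHR = Double(220 - age)
    let moderate = (0.50 * maxHR)...(0.70 * maxHR)  // moderate-intensity band
    let vigorous = (0.70 * maxHR)...(0.85 * maxHR)  // vigorous-intensity band
    return (moderate, vigorous)
}

let zones = targetHeartRateZones(age: 40)  // maxHR = 180
print(zones.moderate)  // 90.0...126.0 bpm
print(zones.vigorous)  // 126.0...153.0 bpm
```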

And this is where it gets complicated.

It's one thing to watch your steps climb toward the 10K milestone as a metric of performance. It's much harder to track a recommended HR level, and its duration, in order to meet personalized optimal health thresholds.

Communicating the meaning of heart data is a challenge that has been taken up by Dr. Bruce Johnson and his lab team at Mayo Clinic. With over a decade of research in wearables, and a legacy of field work focused on studying the limits of human heart and lung performance, Dr. Johnson has a passion for technology that can meaningfully communicate a person's moment-to-moment fitness in an actionable way.

Here's where the much-maligned Apple Watch actually takes a step above the rest of the wearables field. 

Mayo Clinic is one of many top-tier US hospitals that have been trialing Apple's HealthKit (HK) platform, which integrates healthcare and fitness apps, allowing them to synchronize via iOS devices and collate their data. By coupling HK with the Apple Watch's (not-perfect-but) highly rated heart rate sensors and the paired screen of the iPhone, we suddenly had the opportunity to envision a generative health identity that could show users exactly where they stood in relation to their Mayo-prescribed intensity thresholds, both daily and cumulatively.
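
(For the developers reading: here is a minimal sketch of what pulling heart-rate samples out of HealthKit looks like, with error handling elided. It is illustrative only, not the Heart Halo's actual implementation.)

```swift
import HealthKit

// Illustrative only: a minimal HealthKit heart-rate read, not the Heart
// Halo's code. Requires the HealthKit entitlement and user consent.
let store = HKHealthStore()
let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate)!

store.requestAuthorization(toShare: nil, read: [heartRate]) { granted, _ in
    guard granted else { return }
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate,
                                       ascending: false)
    let query = HKSampleQuery(sampleType: heartRate, predicate: nil, limit: 10,
                              sortDescriptors: [newestFirst]) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())
        for case let sample as HKQuantitySample in samples ?? [] {
            print("\(sample.startDate): \(sample.quantity.doubleValue(for: bpm)) bpm")
        }
    }
    store.execute(query)
}
```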

Last year, I wrote on LinkedIn about our company's (ORA's) vision that, in the future, complex and big data flows would signal in dimensional objects (think of clouds, or flowers, which are essentially produced by complex data inputs). At that time, we were just releasing the beta SDK of our HALO technology. You can see that now-evolved SDK here.

About 6 months later, we began working with Dr. Johnson and his team to build the first dynamically responsive health identity, called the Health HALO.  And this week we are launching a closed beta test of the software. Below is the onboarding video guide that shows the HALOs and explains how they work:

If you've skipped the video, the short description is: ORA and the Mayo Clinic have developed a digital system that lets users track their daily progress towards optimal heart health by deploying Mayo's proprietary algorithms and embedding them in the Health HALO. The user's cumulative performance is then tabulated for classification in a four-tier national ranking system of Bronze, Silver, Gold, and Elite, also programmed by Mayo Clinic. (You can see the performance criteria for the HALO colors and tiers here.)
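
The actual criteria live behind that link; purely to illustrate the shape of such a tabulation, here is a hypothetical tiering over weekly intensity minutes (the enum, function, and thresholds below are mine, loosely echoing public weekly-activity guidelines, not Mayo's algorithms):

```swift
// Hypothetical cutoffs for illustration only; the real HALO tier criteria
// are Mayo Clinic's proprietary algorithms, linked above.
enum Tier: String {
    case bronze, silver, gold, elite
}

func tier(forWeeklyIntensityMinutes minutes: Int) -> Tier {
    switch minutes {
    case ..<75:     return .bronze
    case 75..<150:  return .silver
    case 150..<300: return .gold
    default:        return .elite
    }
}

print(tier(forWeeklyIntensityMinutes: 180))  // gold
```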

I'm not just the CEO of ORA, I'm also the lead product tester. And I can say without any exaggeration that this system has totally transformed my approach to fitness and personal health. Where I used to lift weights or hit the stationary bike and hope I got enough cardio to be considered "healthy," now I engage the HALO app and actively watch it grow with my energy and oxygen, literally, until it blooms into the color-filled HALO I aspire to. I rarely go to sleep without building a HALO that at least hits what's called a 'rose'-level performance:

[Image: a 'rose'-level HALO]

This is the blazing sun I earn after going for a run or crushing the elliptical at the gym:

[Image: a 'sun'-level HALO]

Our plan, after responding to the beta results, is to roll out a device-agnostic consumer app as well as licensing to wellness platforms. We also believe there is a huge opportunity here to engage payors, who might use this technology to reward responsible behavior. The combination of a bio-dynamic personal health identity that can be pegged to insurance rates, and possibly rebates, is the closest thing to a killer app for wearables I've seen.

But of course I would say that. I'd love to hear what some of you think.

HALO SDK (beta)

[Image: ORA SDK]

ORA's first product is called the HALO. It is dimensional data visualization. Think of it as the new pie chart. 

The HALO is a 3-dimensional object that maps complex data flows into an aggregate, intuitive picture that signals the performance of an entity or a person. This is achieved by writing data calls that populate up to six "vertices" of the HALO (size, color, complexity, speed, brightness, and wobble).

The HALO SDK gives developers access to the ORA API, which generates HALOs from their specified data calls.
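
To give a feel for the idea, here is a minimal sketch of driving the six vertices from normalized data inputs. The type and function names, the ranges, and the mapping itself are hypothetical, not the actual ORA API:

```swift
// Hypothetical sketch of the six-vertex mapping; names, ranges, and the
// mapping itself are illustrative, not the actual ORA API.
struct HaloParameters {
    var size: Double        // overall scale of the object
    var color: Double       // position along a color ramp
    var complexity: Double  // density of geometric detail
    var speed: Double       // rate of animation
    var brightness: Double  // luminance
    var wobble: Double      // amplitude of organic deformation
}

/// Clamp an arbitrary metric into 0...1 against its expected range.
func normalize(_ value: Double, in range: ClosedRange<Double>) -> Double {
    let t = (value - range.lowerBound) / (range.upperBound - range.lowerBound)
    return min(max(t, 0), 1)
}

// Example: drive a HALO from a (made-up) heart-rate data stream.
let halo = HaloParameters(
    size:       normalize(152, in: 60...190),  // current heart rate, bpm
    color:      normalize(34,  in: 0...60),    // minutes in zone today
    complexity: normalize(4,   in: 0...7),     // active days this week
    speed:      normalize(152, in: 60...190),  // pulse with the heart rate
    brightness: normalize(0.8, in: 0...1),     // progress toward daily goal
    wobble:     normalize(12,  in: 0...30)     // heart-rate variability, ms
)
print(halo)
```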

If you're a developer who works with big data sets and who wants to get into building dimensional visualizations across multiple vertices, send us an email and we'll get you set up with the HALO SDK (beta version).  

Chasm jumping

[Image: crossing the chasm]

ORA is developing data systems that will give our customers the ability to store, navigate, and visualize their data.  The first product we designed is a 4-component, 3D module called the protostar system, which is navigable through time and space, and of which the HALO is one component.  

We released and patented the HALO as a way of getting to market faster and of learning from our early adopters how users perceive and interact with this new kind of dimensional data viz.

The imminent release of our (beta) HALO SDK marks our move from visionary Innovators (a segment introduced in Everett Rogers' diffusion of innovations model) to Early Adopters:

By moving past the visionaries who first engaged us and gave us an outlet and feedback loop to iterate the product, we are now pushing into a SaaS pricing and business model that will involve incrementally larger and more risk-averse customers.

One of the ways this transition is described is Cartezia's Triple Chasm Model:

The 3 chasms defined by Cartezia's Triple Chasm Model cover the transition from concept to demonstrator, demonstrator to early product, and early products to volume products. In our experience, most market failures do not occur because of problems with technology, management or funding, but arise from the failure of companies to recognise where they are in this development cycle and understanding the different skills and resources required to cross each chasm.
Crossing Chasm II is about turning the proven concept into a product or service with a viable business model. Historically, this was an area that Venture Capital was supposed to concentrate on, consistent with its mantra of high risk-high return...

Engineering God Mode

Take a sip of this Kool-Aid, and you might be convinced a wave of new technological innovation is upon us:

With advances in the fields of mobile computing and data processing there is now the potential for a new kind of programming to transform the way human beings identify, exhibit, and explore themselves, and the companies, organizations, and nations they populate.

This is the advent of an evolutionary moment which will be brought on by a new kind of coder/designer. These creative engineers are quietly advancing a new computer language that is best described as object-based, or “generative,” code. And while they are well-known in the tech/design field, Silicon Valley technologists and the investor class are almost universally unaware of them.  

We shouldn’t be surprised.

Like most scientific and technological communities on the verge of a paradigm shift, the Valley's thought and investment leaders have no idea what is coming next. They are too busy trying to benefit from the current status quo, which they have essentially created. And, as Thomas Kuhn noted in The Structure of Scientific Revolutions:

Almost always [those] who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change.

And, as we have learned, old paradigms die hard.

 

| towards a generative code platform |

What is object-based generative code? Put simply, it's code that dynamically generates and morphs 'objects' through a system of inputs that are either controlled by the viewer (directed) or fed from external sources (passive). So, at its most sophisticated levels, these are programs that animate objects through live data flows, such as bio-signals, stock market indices, sonic beats, language in text messages, weather changes, and geo-locational signals. The people who write this code are described as computational designers, or, as I know them: code artists.
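
Before the real thing, a toy sketch of directed versus passive inputs (my own illustration in code, not any artist's actual work): an object whose parameters mutate each frame from a live signal.

```swift
import Foundation

// Toy sketch of input-driven object mutation; illustrative only. A "passive"
// input is merely observed (e.g. audio amplitude); a "directed" input is a
// control the viewer manipulates.
struct Blob {
    var radius = 1.0
    var hue = 0.0
    var jitter = 0.0

    mutating func update(passiveSignal: Double, directedControl: Double) {
        radius = 1.0 + 2.0 * passiveSignal  // pulse with the signal
        hue = (hue + 0.01 + directedControl * 0.05)
            .truncatingRemainder(dividingBy: 1.0)  // drift around the color wheel
        jitter = passiveSignal * passiveSignal  // get noisy at signal peaks
    }
}

var blob = Blob()
for frame in 0..<120 {  // two seconds at 60 fps
    let t = Double(frame) / 60.0
    let amplitude = abs(sin(2.0 * Double.pi * 2.0 * t))  // stand-in for a live audio feed
    blob.update(passiveSignal: amplitude, directedControl: 0.5)
}
print(blob)
```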

To give you an example of how generative code works: check out the video below, which is a process demo by visionary code artist Reza Ali of a generative app he designed that he describes as “interactive (and) audio-reactive.”

The key here is that the dynamic motion and mutations are responding to live audio signals. This is music + code = free-form object mutation.

Josh Nimoy is another elite computational designer. He was hired by director Joe Kosinski to write code that generated special effects for TRON: Legacy (design-directed by my ORA partner GMUNK), considered one of the most stunning achievements in modern GFX. Below is an excerpt of some of the code he wrote for that project:

| tech rev 2.0 |

With advances in the collection, processing, and analysis of data, we have turned a civilizational corner. We can now make sense of a multitudinous system of decisions, and their tethered outcomes, which in the past were far too complex, and seemingly chaotic, for us to extract any tangible benefit from.

That is huge, in evolutionary terms. You can’t truly change anything, either on an individual, institutional, or global level, unless you can ‘objectify’ (or ‘see’) the entity that needs changing. It’s the cornerstone of all healing and personal transformation programs.

It’s called the overview effect. Or, in tech terms, God mode.

But this won’t happen if we limit our work to capitalizing and developing those systems that manipulate and control the data flows to return the most banal, and insidious, behavioral insights.

If we unleash deep data and develop technologies that allow it to signal to us the hidden intelligence in our human, geological, financial, and other systems, it will bestow a new level of self-awareness and self-knowledge upon our civilization. This means learning to see the hidden messages in the data, instead of writing code that gives us the pre-determined outcomes demanded by a myopic, and essentially mercenary, market.

But we also need to develop a set of tools to communicate that intelligence to us. And those will come in the form of generative visualizations: computational objects, environments, and, eventually, worlds, that take complex and seemingly unrelated data flows and aggregate them into “sense-making” technologies.

Evolutionary, paradigm-shifting applications that finally free us from the poison pill of human governance that has kept us in the shadow of our true potential. No longer can decisions — political, economic, medical, military, social — be made based on human whims, caprice, biases, or opinions… but, rather, on the nexus of billions of lines of data which can point us to optimal behaviors.

This could revolutionize the way the World Bank lends money. How medicine is priced and distributed in developing world markets. How we develop an accurate and up-to-the-minute reporting mechanism on our survival as a species.

How is it that we are not already making this the most critical objective of our massively endowed technology sector?

Because most of its leaders are stuck in rigid economic systems and ossified ways of seeing. I know because I spend a lot of time talking to investors and technologists who are rooted in the old paradigm. They cannot grasp, nor visualize, an infrastructural shift away from text-based computing. They don't know what generative coding is, how it works, or that there is even the possibility of mapping live data into dynamic objects.

Not surprising, considering the vast amount of capital the major VCs, and the market in general, have invested in text-based social networks and search platforms.

 

| the next dimension of big data |

Ironically, if Silicon Valley is slow to catch on, the mainstream public is increasingly aware of the kind of future that awaits through generative systems. That’s because the biggest source of funding for these code art projects comes from Hollywood and the motion picture industry. Films like Minority Report, TRON: Legacy, and Prometheus have plot lines that prominently feature generative code-driven holograms and UI/UX interfaces. The design departments for these films, which create functioning tech, are budgeted in the tens of millions of dollars. Yet the technology sector is comparatively underfunded when it comes to engineering a future that is both beautiful and utilitarian.

> Prometheus hologram

Imagine:

Cities and countries would no longer be depicted solely by their geographic dimensions, but as dimensional objects formed by all of the data flowing out of them. Companies' online representations would no longer be 2D websites, but rather explorable worlds woven together by the data of the people, performance metrics, and products they have been built upon. Doctors would no longer have to double as high-level statisticians to read the reams of graphs and numbers that run off their various tech. Instead, they and their patients would view heart and other bodily-system status through actionable, bio-mimicked visualizations.

But for me, the killer app of this evolutionary thrust resides in social networks and digital identity.

With a new “sky layer” — a data visualization platform which sits atop the social and search realms, powered by generative code — users of Facebook and Twitter would no longer be compartmentalized in some post-Tower of Babel reality in which they are unable to viscerally communicate with anyone outside of their linguistic group.

Instead, they’d experience a dimensional realm, coded in a universal, object-based language in which their ‘profiles’ and identities are based on their biographical and moment-to-moment data.

There is a growing sense that this evolution in computing needs to happen. Human beings must create technologies that harness, alchemize, and output their data so that we can get a view of our world and the impact our moment-to-moment actions have on it.

After all, self-knowledge is the essence of human identity, and the next technological revolution must offer unprecedented opportunities for us to know ourselves, and our world, as we never have.

 

[Stephen Marshall is the co-founder and product lead for ORA, a Seattle/London-based start-up innovating in the realm of dimensional data visualization, and a portfolio company of the DataElite accelerator.]

 

This is your brain on Burial

ORA's co-founder and chief systems architect being the coolest cat in tech:

Computational artist Peter Crnokrak shares What Need Angel.

Crnokrak describes What Need Angel as “a synesthetic transcription of the brainwave response of a five year old boy while listening to music.”

Using an electroencephalograph (EEG) headset, he first recorded the boy’s responses to music in a darkened room, and then his responses to music-less visual stimuli, mapping the corresponding neural responses in a video.

The best part? He used the music of Burial, specifically ‘Loner’, ‘Kindred’, ‘Rival Dealer’ and ‘Come Down To Us’. Watch the video below and head to his Vimeo for a full accounting of his methodology. [H/T Hyperdub]

http://www.factmag.com/…/04/13/this-is-your-brain-on-burial/

the future of big data viz

Beautiful articulation of the move to big data viz and object-based computing by Roambi's Quinton Alsbury:

“Data is probably the most intimidating type of content for most end-users outside of analysts to engage with,” says Quinton Alsbury, Co-Founder and President of Product Innovation at Roambi, the winner of the Design for Experience award in Bringing Order to Big Data. “Providing users with a simple and engaging experience that presents the information in an aesthetically pleasant way—and that helps guide them through how they should interpret it via a highly interactive UI—can dramatically improve their confidence. Very often, you find that once this ‘a-ha!’ moment happens for users, they begin clamoring for more and more information.”

“We think of data as content, no different than classic unstructured formats like video, images, copy, etc. Throughout history, different types of content and the mediums used to capture and present it have dominated the culture both in business and in general. We believe that data is quickly becoming the primary content of the 21st century, so there is a huge opportunity to create the tools and the medium to help people interact with it.”  

"I believe there will be a whole new creative class that forms around taking the massive amounts of data that are being generated, and helping transform it into something that is as easy to engage with as video."