After nearly 140 years, the headphone jack has finally been put to rest. Apple today launched the iPhone 7 series of smartphones, and with it came the announcement that it will no longer have a headphone jack. Instead, headphones now plug directly into the Lightning connector – a risky stance, especially for those Bose-loving music fanatics who would rather have one universal headphone that works with both desktops and phones. And that is exactly the problem with changing a universally accepted connector. For a geeky analogy, this would be like Cisco changing the RJ45 connector for Ethernet on their new line of switches. Apple, in its defense, does plan to offer an adapter for “traditional” headphones. But having used the Mophie Juice Pack, which offers a headphone adapter to “seamlessly” use the case along with headphones, I must admit it is yet another rather rigid cable to carry along. It remains to be seen how users will accept the idea. Early Twitter reactions show that feelings are mixed.
Having said that, I am very excited about the AirPods. Tangled cables were never my favorite things to carry, nor was I terribly excited about the Bluetooth wireless headsets available today. Again, the reactions to the AirPods launch have been mixed. The fact that there is no wire connecting the two earpieces is seen as a major concern, especially with the risk of losing one of them. Apple has tried to address this with a rather sleek-looking case that also acts as a charger. I must say that of all the products and features announced today, the AirPods are the ones that got me really excited.
So while the verdict on how well the “new” headphone jack and the AirPods will be accepted is still pending, this year’s keynote did bring consumers more excitement than the last couple of years. And I personally think I would buy at least one product out of all those that were launched. No prizes for guessing which one!
A couple of months back, I enrolled myself in a FinTech course offered by MIT and GetSmarter. I definitely recommend it for anyone who is an avid enthusiast of FinTech and is really passionate about being part of this emerging field. Although, in the past few years, I’ve interacted with several of the products and technologies that have emerged in this area, mainly in the payments, personal finance and remittances space, I’ve never really explored beyond being just a consumer. As it turns out, there is a lot more to FinTech than just money and payments.
I was particularly intrigued by the digital identity space and the advances that have taken place in the field. Although adoption has been slow, the course made me realize that it is a field that has seen several startups emerge and has drawn a lot more interest over the past few years.
A quick look at Google search trends shows that interest in digital identity has been on the rise since the turn of the year. Several factors are attributed to this. The emergence of Bitcoin is seen as one of the primary drivers. Bitcoin is built on an underlying technology called the blockchain. In layman’s terms, a blockchain is a transaction log that was originally built as a public ledger of all Bitcoin transactions, maintained as a chain of blocks. Its decentralized and unalterable nature made it an immensely popular technology among financial geeks. Soon it grew from the public, permissionless Bitcoin blockchain to private and permissioned blockchains built for the enterprise. In a nutshell, that is what a blockchain is.
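To make the “chain of blocks” idea a little more concrete, here is a minimal, purely illustrative sketch in Python (the class and field names are my own, not those of any real blockchain implementation). Each block stores the hash of the block before it, so tampering with an old record breaks every link that follows:

```python
import hashlib
import json
import time


def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class TinyChain:
    """A toy append-only ledger: every block points to the hash of the previous one."""

    def __init__(self):
        # The "genesis" block anchors the chain.
        self.blocks = [{"index": 0, "timestamp": time.time(),
                        "transactions": [], "prev_hash": "0" * 64}]

    def add_block(self, transactions):
        prev = self.blocks[-1]
        self.blocks.append({"index": prev["index"] + 1,
                            "timestamp": time.time(),
                            "transactions": transactions,
                            "prev_hash": block_hash(prev)})

    def is_valid(self):
        # Altering any earlier block changes its hash and breaks the next block's link.
        return all(self.blocks[i]["prev_hash"] == block_hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))


chain = TinyChain()
chain.add_block([{"from": "alice", "to": "bob", "amount": 5}])
chain.add_block([{"from": "bob", "to": "carol", "amount": 2}])
print(chain.is_valid())                             # True
chain.blocks[1]["transactions"][0]["amount"] = 500  # tamper with history
print(chain.is_valid())                             # False
```

Real blockchains add consensus, signatures and peer-to-peer replication on top of this, but the unalterable-history property comes from exactly this chaining of hashes.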
Digital identity, like many other themes in FinTech, has found quite an interesting partnership with blockchain. There are some aspects of blockchain that can prove extremely useful for digital identity. A primary roadblock to the widespread adoption of digital identity is trust. Consumers and businesses have to trust the privacy and security of any digital identity solution. In Know Your Customer (KYC) checks, as banks call them, identity is verified by every bank every time a customer begins a new relationship or transaction with that bank. The overhead this creates in a digital era could possibly be solved by a private blockchain-based identity system.
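As a purely hypothetical sketch of that idea (the registry, function names and banks below are invented for illustration and do not correspond to any real system), a consortium ledger could hold a one-time KYC attestation that other member banks look up instead of repeating the verification themselves:

```python
import hashlib
import time

# Hypothetical shared KYC registry. In practice this would live on a
# permissioned ledger operated by the consortium, not in a local dict.
kyc_registry = {}


def identity_key(name, national_id):
    """Member banks share only a hash of the identity attributes, not raw documents."""
    return hashlib.sha256(f"{name}:{national_id}".encode()).hexdigest()


def attest_identity(bank, name, national_id):
    """The first bank to complete full KYC records an attestation once."""
    kyc_registry[identity_key(name, national_id)] = {
        "verified_by": bank,
        "verified_at": time.time(),
    }


def is_verified(name, national_id):
    """Any other member bank checks the ledger instead of repeating full KYC."""
    return identity_key(name, national_id) in kyc_registry


# Bank A performs KYC once; Bank B later relies on the shared attestation.
attest_identity("Bank A", "Alice Kumar", "ID-1234-5678")
print(is_verified("Alice Kumar", "ID-1234-5678"))  # True: no repeat paperwork at Bank B
```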
Now imagine a digital world where a consortium of such private blockchains can prove the identity of a person. Imagine also a world where you no longer have to use a paper form of identity to open an account, move across borders or even simply enter a bar. There are several challenges to making this a reality, and my previous post on the regulatory challenges covers just one of them. As I navigate this world of digital identity, I plan to keep posting more.
Digital identity is seen as an integral enabler of the Internet of Value. Over the past 5 years, digital methods have increasingly become the preferred means of transacting, including payments, remittances and more. Statistics point out that in a traditionally cash-friendly country such as India, digital transactions exceeded cash transactions in FY2015. Studies have analyzed the positive impact of digital identity on GDP, tax and employment. A recent study by the Boston Consulting Group points out that digital identity can bring governments across the globe up to $50 billion in savings by 2020. This growing prevalence of digital identity brings with it the need for regulatory measures to ensure that the information is handled responsibly by both the private and public sectors.
While trust is a primary factor for digital identity to succeed, there is also a need to validate the identity claimed by the individuals involved in a transaction. Several regulatory initiatives have been launched by countries across the globe to govern this validation. In the United States, the National Strategy for Trusted Identities in Cyberspace (NSTIC) has taken up the task of creating secure online identities for Americans. The EU formed eIDAS to develop regulations on digital identity, and the UK started the Identity Assurance Program (IDAP), a government certification program that authorizes private sector companies to act as digital identity providers.
The regulations and initiatives started so far in this space are good for the adoption of digital identity. They cater to the two important reasons for regulation: uncertainty and public good. The public needs to be assured that their identity is safe with the identity holder. They also need to be confident that, in their transactions with their peers, they have a valid and trustworthy digital identity to rely on. Hence, in my view, the regulatory initiatives so far address the primary concerns of the public.
However, a problem arises when there is a need for cross-border transactions. Paper identity has clear boundaries: a driver’s license, for instance, is exclusive to the country that issued it. It is also well regulated by the government, backed by proper background checks. Digital identity, on the other hand, has loose boundaries in the Internet world. Different countries also have different visions of what an identity is and how it can be regulated, even though it all ultimately gets accessed globally. The challenge for the growth of any digital identity venture is that there is no common consortium that can regulate identity globally.
Another challenge lies in the fact that banks continue to be the party responsible for verifying identity, even though it is issued by states and countries. Although there have been efforts to create a private-sector, worldwide identity and authentication system (FIDO – Fast IDentity Online), adoption has been slow.
Digital identity is the key to the growth of the digital economy and the Internet of Value. A comprehensive regulation, carefully crafted to avoid too-big-to-fail ventures – much like the passport systems regulated by the ICAO (International Civil Aviation Organization) – can go a long way in enabling a paperless digital world in the future.
Being connected has become an integral part of human evolution in the 21st century, be it through laptops or smartphones, smartwatches or traditional landline phones. Cisco reports that by the end of 2016, global IP traffic alone will hit the zettabyte threshold. Automatic refreshes and downloads mean that users are connected 24x7 across the globe. Digital connectivity has become more of a utility than a luxury on almost every continent.
Tracing the evolution of connecting “things” has always intrigued me. Back in the days of personal computers, when the Apples and the Microsofts of the world fought to capture the market, the idea of connecting “things” was restricted to desktops. An Ethernet cable plugged into the RJ45 connector on your desktop was how you connected to the “web world”. Then came the laptops. The need to fully utilize their mobility was perhaps what led to connectivity over the air, or WiFi. With laptops and desktops hooked onto the web world, the wise men from the mountains began to explore the possibility of connecting phones. And thus emerged the world of smartphones. The rest is history.
The term Internet of Things was coined back in 1999 by a British entrepreneur named Kevin Ashton, co-founder of the Auto-ID Center at MIT. In layman’s terms, it can be viewed as a way to connect the physical world with the web world. Ashton described it as a way for computers to sense things for themselves, without humans having to tell them what to do. In other words, constant communication between the computers of the world, creating a society amongst themselves.
Over the years, the term has taken on a dimension of its own. The word computer began to take different shapes and forms – from smartphones to smartwatches, from smart grids to smart cities, and everything in between. My quest to explore the intricacies of this vast subject began during a company pitch session I participated in. Smart cities were the theme of the pitch, and that got me to delve deeper into the subject.
Over the next few posts, I plan to write about different aspects of IoT. Each week I plan to cover one of the major areas of growth, from consumer applications to corporate use cases and city planning. While I’m not a master of this field, I plan to write as I learn and hopefully paint an exhaustive picture at the end of it all.
“Advances since 1970 have tended to be channeled into a narrow sphere of human activity having to do with entertainment, communications, and the collection and processing of information.”
He attributes this “stagnation” in innovation to rising inequality, stagnating education, an aging population, and the rising debt of college students. He gives several examples in the form of household necessities such as the television, the refrigerator, the automobile and medicine, all of which have not progressed much further since the 1970s, despite being slightly different in form factor or shape.
He is also right about the information technology advances: the personal computers that revolutionized the 1980s, the iPods that took the music industry by storm in the early 2000s, and the iPhones and smartphones that followed in 2007, changing the way people stay in touch. There has been an immense change in the way news is broadcast, through the current social media giants such as Facebook and Twitter. Today a person can know what is happening around the world and communicate with anyone across the globe while sitting on the couch at home. With that comes the feeling of being too comfortable with our lives. Back in the “golden” era described in the book, between 1870 and 1970, comforts were limited and hard work was the key. Innovations came as a natural consequence of the quest for a more comfortable life.
But there is something the book does not consider as a factor in the slowdown of innovation: have we reached a point where life is so comfortable that we have stopped thinking about innovations that could make it even better? Just food for thought on a Monday night…
The personal assistant gadget space has always been a niche market for big and small firms around the globe. It seemed like a natural transition from smartphones and tablets, once the world started to rely heavily on the connected web of information. According to a survey conducted by the Pew Research Center in 2015, there has been an incredible increase in smartphone use across the globe, most notably in developing nations, with an average increase of around 16 percentage points over the last two years. What that means is that people look at their smartphones the moment they open their eyes in the morning and refuse to let them go until they fall asleep at night. And so, as the world gets busier, it only seems natural that people seek a more personalized “controller” for their lives.
Broadly, there are two approaches adopted by the companies regarded as players in this market. Some take the route of apps built into smartphones that can alert, remind, control and automate your life. Apple’s Siri, Microsoft’s Cortana, Facebook’s M and Assistant.ai are just a few examples of these so-called “embedded” assistants.
Then there are others, such as the Amazon Echo with its sweet Alexa backend, which are standalone gadgets. Cubic is another example of a standalone gadget trying to enter this playground. In my view, these are gadgets that serve as a “speaker” with a personal touch. In fact, Amazon Echo sales figures have been compared with those of traditional speakers from the likes of Bose and Sony.
So what does it mean to have a personal assistant? He or she should be alongside you at all times – perhaps one of the reasons why “embedded” assistants are more heavily used than their “gadgetized” counterparts. Aido, the robot, promises to do just that to a certain degree. While it might be overkill to imagine robots walking alongside you on the road, as science fiction books and movies so eloquently portray, Aido can start as your partner in crime at one location – be it home or office.
Launched two days ago on Indiegogo, Aido definitely shows promise, racking up an incredible 180% of its initial funding target in just 3 hours.
Watch out for more as I wait to get my hands on one of them…
There was a time when Google and Sony based their innovations and developments on every new product that Apple launched, be it the first iPhone back in 2007, the iPod back in 2001, or even the revolution in digital music through iTunes. It is true that the aura of Steve Jobs enthralled the crowd at every launch. But it is also important to note that these launches were followed with equal enthusiasm by their technology rivals. And it was not the aura of Steve Jobs that kept them on their toes, but his vision! The rest of the world just played second fiddle to Apple – from the Androids to the rebranded Walkmans. Just as online search is now known as Google search, digital music came to be known as iTunes and portable music players came to be known as iPods. In fact, for a period of time, smartphones were simply called iPhones. As the harsh realities of life started to take their toll on Jobs, things started to change… Over the past 3 years, the rest of the world has caught up with Apple. In fact, some have even started to move ahead. Here are a few instances.
Case 1: Apple iWatch
Although it had been touted for several years, the idea of smartwatches really started to gain momentum in early 2013. Rumors were rife, and Apple didn’t do much to quell them either, with the 6th-generation iPod Nano worn on a strap showing promising signs. But in my view, what really made Apple take a step backwards is the way they discontinued the “watch-like” Nano and reverted to the traditional “classic” style. Not only did it send a rather indirect message that a smartwatch was in the works, it also set the think tanks at the Samsungs and Googles of the world to work on one. And what’s more, it took Apple almost 3 years to announce the iWatch, and another 6 months before it will finally be launched. By then, the Pebbles, the Motorolas and the Samsungs will have already reached their second and third revisions of smartwatches.
Case 2: iPhones with bigger screens
For long, Apple avoided this question by stating that small screens are what consumers want. Well, they got the answer when the 5″+ displays from Samsung, LG and the Nexus line started to gain market share over the iPhone 5s. Even an Apple fanatic like me started thinking about switching over to the “dark” side because of this. As tablets and phones started to converge into “phablets”, Apple finally caved in and came out with bigger screens. Yet another case of Apple trying to catch up.
Case 3: iTunes Radio
Although the revolution in digital music was started by Apple, which should have been enough for it to dominate the streaming music space, they fell behind once again, mainly due to misreading consumers’ needs and wants. The result: iTunes Radio, long after Pandora, Google and Spotify had taken away the market share.
Case 4: iWork on iCloud
Yet another example where the emergence of the cloud and its popularity were embraced rather late – so late that even Microsoft was way ahead by the time Apple finally took off.
Not all of this can be blamed on the post-Steve Jobs era. Some of it, especially the bigger-screen iPhones, can be attributed to Jobs’ reluctance to accept the popularity of Samsung smartphones. But what Apple lacks now is a true visionary who continuously strives to turn every new product into something “magical”; that aura which enthralls the audience into believing that everything Apple brings out is revolutionary; a person who commanded respect and strove for perfection. I must admit the last few WWDC and product launch sessions were rather bleak in terms of products and features. It almost felt as though they were still hanging on to that golden age between 1997 and 2011. And it is rather painful for a diehard fan such as me to fathom!
Last Wednesday, I happened to listen to an interesting talk on “The Cloud” by Matt Chastain (@packetB0y), a cloud SE at Cisco Systems, during the one-day Cisco Networkers event in New Brunswick, New Jersey. As he started to introduce the “actors” in the cloud and the types of cloud, a common theme emerged, once restricted to just the SPI model of cloud computing services: “as a Service”. As the relevance of the cloud and its associated offerings grew, so did the scope of the term “aaS”. It began to be used more widely for anything cloud-related. Let me begin with the three original terms, collectively known as the SPI model.
SaaS – Software as a Service
Knowingly or unknowingly, this is perhaps the most widely used service, from common consumer models such as web operating systems like Chrome OS and music delivery platforms like Pandora and Spotify, to more business-oriented models such as Citrix GoToMeeting and Cisco WebEx. Any application that is managed by a third party and delivered to clients across a network can be grouped under this service.
IaaS – Infrastructure as a Service
As data started to become more and more prevalent and its storage and analytics began to gain importance, so did the need for cheap computing power. The ability to provide a cost-effective means to manage applications, data and messaging systems became a niche that organizations would much rather pay to use than set up on their own. Microsoft Azure, Amazon Web Services (AWS) and Google Compute Engine (GCE) provide just that. Oftentimes, these services are also used when an organization needs temporary compute power. Investing in a plethora of servers, network infrastructure and storage for those occasions can be extremely expensive, while IaaS platforms provide a cheaper alternative to set up and tear down virtual infrastructure on a case-by-case basis.
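As a rough illustration of that set-up-and-tear-down pattern (using AWS and the boto3 SDK purely as an example; the AMI ID below is a placeholder and credentials come from your own AWS configuration), a temporary compute instance can be created and destroyed programmatically:

```python
# A minimal sketch of "rent, use, tear down" with AWS EC2 via boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a small, temporary compute instance only when it is needed...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# ... run the temporary workload here ...

# ...and tear it down once the job is done, so you only pay for what you used.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same idea applies on Azure or GCE through their respective SDKs; the point is that the infrastructure exists only for as long as the workload needs it.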
PaaS – Platform as a Service
This was perhaps an afterthought to SaaS, meant to provide a solution platform over the cloud, and is often used interchangeably with SaaS. It can vary from common tools such as Office 365 or iCloud to a more exhaustive suite such as a web OS like Chrome OS. PaaS aids collaboration across geographically spread-out teams and organizations.
As the cloud became more and more prevalent, several other solutions and services started to be grouped under the same umbrella. I must admit, every one of these terms has overlaps, and one can argue that each still fits into one of the original three. But marketing has its own ways of portraying distinctions. Let me list some of the terms I happened to hear during the talk:
DCaaS – Data Centre as a Service
This refers to hosting an entire data center for organizations, more as an extension of IaaS. Companies such as SunGard provide disaster recovery sites to organizations that do not want to spend the money on data centers that will be used only on rare occasions, such as during Hurricane Sandy.
ITaaS – Information Technology as a Service
Also used alongside “Help Desk as a Service”, this is a term coined recently to market what used to be called “service-oriented” companies, including “call centers”.
More followed suit as MaaS (Metal as a Service for automated bare-metal provisioning, or Monitoring as a Service for data center monitoring), NaaS (Network as a Service), DRaaS (Disaster Recovery as a Service) and CaaS (Communication as a Service) started to be used more widely. Soon came the concept of “Everything as a Service” (EaaS) or “Anything as a Service” (XaaS). As things get more and more “cloudy”, I foresee the emergence of several more of these terms, and soon “as a Service” will become a household phrase. Let me leave you with an interesting talk on “LaaS – Life as a Service”. Make sure you turn on the subtitles.
There was a time when the Indian technology industry was thriving with software services companies and BPOs. The likes of Infosys, Wipro Technologies and TCS were major players in both these sectors, earning a name amongst the world leaders for delivering the best of services in the software world. In this race to be the best, what got left out was innovation. While the Facebooks, Googles and Twitters of the world emerged as the giants of the technology world, India seemed content with providing those valuable services in the background. The blame cannot be attributed entirely to the Indian industry alone. There is that faction of Indians who flew across the seven seas to a land known as Silicon Valley to set up shop and be recognized as the entrepreneurs of the West. Some statistics, aptly pointed out by Neesha Bapat in her Forbes article, state that in 2012 more than 14% of the startups in Silicon Valley were started by Indians, an astounding rise from around 7% in 1998. When Satya Nadella was appointed CEO of Microsoft, amongst the myriad newspaper articles heaping praise and pointing out success stories, one article caught my eye. I was initially drawn to it by its title – Nadella as Microsoft CEO, a slap in the face of the Indian system. Although I must admit that a lot of what is stated in the article can be counter-argued with some of the successes Indians have had on Indian soil, those examples are certainly few and far between.
Then came the revolution. What Shashi Tharoor, a renowned author and columnist, calls “soft power” started to emerge. The penetration of cell phones and the emergence of smartphones, along with “phablets”, made information a commodity. The Internet started to be seen as a regular household utility, just like electricity and water. And with it came the appetite for innovation.
Back in April 2012, Sijo Kurivila George and Kris Gopalakrishnan, the co-founder of Infosys, started a venture known as The Startup Village with the goal of keeping innovators in India. The aim was to successfully launch 1,000 companies in India by the year 2022. Add to that the rise of crowdfunding through the likes of Wishberry and Ignite Intent, and the stage was set. And the play began…
In the recent CES 2014 Hardware Battlefield, there was one such startup from Kochi, a small city in India. This particularly aroused my curiosity, since I have personally spent a significant part of my life to date in this town by the backwaters, surrounded by a wall of greenery. Fin, as the founders call it, was a prototype that captured the wildest of my imagination. The final version of the product looks even slicker: a ring worn around the thumb that can control anything connectable via Bluetooth with the flick of a finger. It was available to preorder for $99 on their website a couple of days back; the link has mysteriously disappeared now. The prospects are infinite. A little exaggerated, I know. But I always get excited when a new gadget comes into the world, even as I meet it with a pinch of skepticism.
But what makes me even more excited is the fact that this has now opened doors for many more innovators in India to stay back and plant the seeds of their dream startups in those pockets of technology, those silicon valleys scattered across the peninsula – from the backwaters down south to the land of fortresses in the north. The future looks hazy today, but it definitely seems bright from where I look!
When most of the top consumer product manufacturers were burning the midnight oil to solve the smartwatch mystery, a growing trend over the last few years, one company quietly sneaked in a product in 2013 that took the world by surprise. Rather unheard of in every way, Pebble started off as a dream on an up-and-coming crowdfunding platform called Kickstarter. When traditional venture capitalists failed them, Pebble turned to crowdfunding in April 2012. It soon turned into a cult, raising around $10 million from around 70,000 backers around the world and becoming one of the most heavily crowdfunded projects to date. It wasn’t long before they started mass production, with a release date of January 2013. With around 200,000 units sold by the end of 2013, Pebble announced the second iteration of their smartwatch, the Pebble Steel, at CES 2014 in Las Vegas.
But we are not here to talk about Pebble and its growing success while companies like Apple and Samsung are still struggling to enter the smartwatch market. Rather, I wanted to focus on what made Pebble possible: crowdfunding. At least 4 of the gadgets that won CES 2014 awards were crowdfunded, with the Oculus Rift “Crystal Cove” the most notable among them. Reports suggest that Eureka Park at CES, the launchpad for startups, saw an increase of around 40% in the number of startups exhibiting. And what’s more, there was even an Indiegogo Zone at the venue, with hordes of hardware startups flaunting their jaw-dropping ventures.
Starting a company has traditionally been expensive when it involves hardware manufacturing. Finding an investor is even more arduous if all you have is a concept or a dream. Most investors in a manufacturing venture require a well-charted business plan stating the course of action, and this has kept a lot of startups stalled for long periods of time. Crowdfunding has changed all that. By offering certain incentives to the crowd that funds the project, these startups have found a way to bring their dreams to reality. Statistics show that Kickstarter has 128,244 projects registered, with around $940 million pledged by close to 5 million users across the globe, and around 55,000 of those projects have successfully taken off. Although Indiegogo has not published its statistics yet, I’m sure it has started to see heavy investment from users all around.
A bit of trivia while I leave you to mull over the next venture you plan to start: CES 2014 took crowdfunding rather seriously. So much so that even their live streaming was crowdfunded!