The Hadoop Summit 2016, San Jose – Through the eyes of the MSRCosmos big-data & analytics team.
Believe it or not, we have yet to fully recover from the hangover of this exhilarating event, which was as energy-sapping as it was gratifying.
Imagine two days of rigorous training on big data, followed by inspiring keynotes, uber-cool exhibits – cutting-edge technologies, custom solutions, some really out-of-the-box ideas (clichéd? yeah – it seemed most befitting), unique presentations (proofs-of-concept, use-cases, live demos, and more), and a fantastic networking opportunity for everyone in the big-data and Hadoop realm. Did we mention Hortonworks celebrating 10 years of Hadoop success on the second day – accompanied by lilting music, great food, and drinks?
What more could one ask for?
Now, let us tell you what happened at the summit, as we saw it.
Apache NiFi & Hortonworks DataFlow (HDF) rock!
To put it simply, these two phrases were the torch-bearers for everything else that was to transpire in the conversations during the summit. Why?
The Hadoop Summit bears the name of the platform that revolutionized processing of data-at-rest, yet two platforms known for data-in-motion (collecting, curating, analyzing, and delivering real-time data) were the top buzzwords throughout the event.
There is an obvious connection: not all the data coming in from a connected world (especially the Internet of Things) yields real-time insights for stakeholders; some of it must be handled later as historical data, or data-at-rest. This means the combination of data-in-motion and data-at-rest processing, running in parallel with Hadoop, is relevant in many operational and business scenarios. That is precisely why Apache NiFi, which enables bi-directional data flow (and much more), and Hortonworks DataFlow (HDF), which collects, curates, analyzes, and delivers real-time data from the Internet of Anything (IoAT) to data stores with help from Apache NiFi, were much talked about and endorsed at the event.
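To make the data-in-motion versus data-at-rest split concrete, here is a minimal, hypothetical Python sketch – not actual NiFi or HDF code, and the event types are invented for illustration – that routes incoming events either to a real-time path or to an at-rest store, the way a flow-management layer sits in front of both:

```python
import json
import time

# Toy event router illustrating the data-in-motion vs. data-at-rest split.
# This is NOT NiFi/HDF code; it is a hypothetical sketch of the concept.

REALTIME_TYPES = {"alert", "sensor_threshold"}  # assumed "urgent" event types

realtime_queue = []   # stand-in for a stream-processing feed (data-in-motion)
at_rest_store = []    # stand-in for HDFS / a historical store (data-at-rest)

def route(event: dict) -> str:
    """Send urgent events down the real-time path; archive the rest."""
    if event.get("type") in REALTIME_TYPES:
        realtime_queue.append(event)
        return "in-motion"
    at_rest_store.append(json.dumps(event))  # serialized for later batch analysis
    return "at-rest"

events = [
    {"type": "alert", "value": 98.7, "ts": time.time()},
    {"type": "telemetry", "value": 21.4, "ts": time.time()},
]
for e in events:
    print(route(e))
```

In a real deployment the two branches would feed a stream processor and a Hadoop cluster respectively; the point is simply that one ingest layer serves both paths in parallel.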
Enthusiasm at its peak!
Although the turnout appeared a tad lower than last time, it was a good mix. About 40% of the companies present were exploring the big-data path to business transformation, while the remaining 60% had existing environments and had come seeking answers to very specific questions about their big-data implementations.
However, if one gauges the success of a community event like the Hadoop Summit by the quality of representation – both the industries present and the level of the executives representing them – then this would have to go down as one of the most successful in recent history. From tech-savvy enthusiasts to experts and business heads, the summit had them all, spanning verticals including life sciences (pharma, healthcare, medical devices, clinical trials), universities, manufacturing, retail, e-business, insurance, energy, utilities, and financial services. Every one of them participated with great zeal, and curiosity even, exploring the best use-cases and ready-made solutions, if any, for their respective industries.
They were looking to talk to the right people to get the right information and, importantly, to chart the way forward with their big-data projects and the predicaments surrounding them.
There were many novel presentations for sure, but the defining undercurrent of discussions throughout the three-day event revolved around finding ways to maximize business benefits from existing Hadoop environments.
That inevitably led to further, very detailed discussions on the pain points of current big-data initiatives and projects, and on finding solutions to them.
Some of the major concerns / challenges / requirements we observed were:
- Security & administration issues
- Data fine-tuning & analysis needs
- Data clogging issues
- Increased ingestion cycles hurting analytics, decision-making, etc.
- Inability to ingest live-streaming data
- Migration requirements
While we expected security to be a pressing issue and were well prepared to present our offerings, we were not expecting such huge interest in solving security- and administration-related problems in big-data and analytics implementations. Going in, our optimism had centered on helping businesses find new avenues for revenue (eventually) from their business data. Nonetheless, we were more than willing to listen to what businesses had to say about their security and administration problems; even companies with mature big-data set-ups expressed similar concerns.
But help was at hand from our big-data CoE team, which went on to deliver multiple sessions (at our booth, E26) on how to keep big-data environments secure – whether on cloud, on-premises, or in hybrid environments – and elaborated on multi-level security solutions.
Big-data analytics adoption…
We are aware that most of the business world has come to acknowledge the importance of big-data analytics. However, not all have a set-up in place that helps them make good use of their business data. One of the major impediments is the initial investment required (some of it actual, some perceived), along with the extensive preparation it demands.
We are a comprehensive big-data analytics solutions company, aren't we? How could we be comprehensive if we didn't have a solution for this too?
The solution is our "Big-Data Jumpstart" program, which helps companies get a quick start on their big-data projects. We demonstrated it to many customers who were convinced of big-data's possibilities and intent on rolling out data-analytics initiatives. In fact, many attendees whose companies do not yet have a big-data set-up, but were on the verge of implementing one (and defining the road map), felt the Jumpstart program would be very useful to them; one of our existing customers, currently contemplating a big-data analytics implementation, was among them.
Although we felt that not everyone in the big-data space – or at least not everyone who could benefit from it – was there, there were enough positive vibes for us to feel elated and excited by how the event turned out.
Seeing the big-data space through the eyes of the customers, and being able to explain why and how big-data analytics will help their businesses, kept us going. We sustained interest with very detailed use-case scenarios. HCube received an enormous response from customers suffering from issues such as data clogging, lengthening ingestion cycles, and the inability to ingest live-streaming data. HCube, evolving into a one-stop solution for all the ELTA (extract, load, transform, and analyze) needs of big-data implementers (with an in-built scheduler to boot!), also greatly impressed many technical experts as a fitting solution for their burning needs.
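For readers unfamiliar with the ELTA pattern mentioned above, here is a minimal, generic Python sketch – illustrative only, not HCube's actual implementation, and every function name here is hypothetical – showing the key idea: raw data is loaded first, untouched, and only then transformed and analyzed in place:

```python
# Generic ELTA (extract, load, transform, analyze) sketch.
# Illustrative only -- not HCube's implementation; all names are hypothetical.

def extract():
    """Pull raw records from a source (here, a hard-coded sample)."""
    return ["12.5", "bad", "7.0", "30.25"]

def load(raw, store):
    """Land the raw data in the store as-is (load happens before transform)."""
    store["raw"] = list(raw)
    return store

def transform(store):
    """Clean inside the store: keep only records that parse as numbers."""
    store["clean"] = [float(r) for r in store["raw"]
                      if r.replace(".", "", 1).isdigit()]
    return store

def analyze(store):
    """Run a simple analysis over the cleaned data."""
    clean = store["clean"]
    return {"count": len(clean), "mean": sum(clean) / len(clean)}

store = {}
result = analyze(transform(load(extract(), store)))
print(result)
```

The design choice ELTA makes over classic ETL is visible in the call order: because raw data lands in the store before any cleaning, transformations can be re-run or revised later without re-extracting from the source.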
Further, our packaged solutions for many industry verticals added extra zing to the conversations with visitors to our booth. We strongly feel that the very good response to our industry-specific packaged solutions, HCube, the Big-Data Jumpstart program, and our CoE – which together enable process and model transformation as well as business realization – went a long way toward dispelling the myth that big data is a vision without reality.
Rounding it off with a grand raffle…
Our celebration of the event began on the concluding day itself, when we arranged a grand raffle for participants and raffled off a Samsung Gear smartwatch and Fitbits to the lucky winners!
We continue to help business leaders such as CIOs and CTOs become business enablers by giving them access to proven big-data analytics expertise through our big-data CoE and a full range of big-data managed services.