You Do Business in the 21st Century, So Why Is Your Database from the 20th?

October 26, 2012 in Analytics,Databases,Mobility,Social Media,Unstructured data | Comments (0)

Savvy CIOs are beginning to come to terms with “technology debt,” the burden of outdated technology accumulated over generations of IT. The need for a fresh start is being driven home by the droves of architects and developers who are rejecting the quaint notion of managing enterprise information through traditional databases. Instead, they are seeking to deploy a modern real-time data platform that is flexible enough to support the dramatically changed enterprise computing landscape and powerful enough to handle new data demands in real time.

I can’t think of a single organization that is not undergoing upheaval in its compute environment. And it’s happening faster than most people realize. For example, it took more than 25 years of PC sales before Gartner estimated that one billion personal computers had shipped by 2002. A mere five years later the two billion shipment mark for PCs was surpassed.

Our mobile world is moving much faster. Last year, according to Canalys, smartphone sales exceeded PC sales for the first time. And IDC says that this year the combined sales of smartphones and tablets will exceed 895 million units, more than double the 400 million PCs forecast to ship by the end of 2012.

In addition to supporting vastly more powerful desktops and countless mobile devices, CIOs now need to factor in cloud computing services as well as huge troves of unstructured data found in social media. On top of that, IT departments are increasingly being told to deliver data securely to anyone anywhere and to do so in real time.

Yet, for far too many enterprises, the database infrastructure remains stuck in the architecture of the 1980s. Too often, it’s limited to delivering static reports culled only from structured data housed in overmatched traditional databases.

To be competitive in the 21st century, companies need to start thinking about deploying an enterprise data platform rather than simply upgrading their current database. An enterprise data platform is much broader than traditional database technology. It integrates different data management components and services that have been optimized for specific business tasks. For example, while many databases can run analytics on structured data, a real-time data platform is a complete technology package that includes in-memory design, a columnar architecture, and integrated tools like Hadoop. The result: analytics can execute on unstructured data in real time.
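To make the columnar point concrete, here is a minimal Python sketch (with made-up data and column names) of why a column-oriented, in-memory layout favors analytics: an aggregate over one attribute scans a single contiguous column instead of every field of every record. It illustrates the storage principle only, not any vendor's implementation.

```python
# Toy contrast of row-oriented vs column-oriented storage for analytics.
# The records, column names, and values are hypothetical.

# Row store: each record is kept together, as a transactional database would.
row_store = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0, "status": "shipped"},
    {"order_id": 2, "region": "APJ",  "amount":  75.5, "status": "open"},
    {"order_id": 3, "region": "EMEA", "amount": 210.0, "status": "shipped"},
]

# Column store: each attribute lives in its own contiguous array.
column_store = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APJ", "EMEA"],
    "amount":   [120.0, 75.5, 210.0],
    "status":   ["shipped", "open", "shipped"],
}

# Analytic query: total order amount.
# Row store: the scan drags every field of every record through memory.
total_from_rows = sum(rec["amount"] for rec in row_store)

# Column store: the scan touches only the one column the query needs,
# which is why columnar, in-memory engines excel at aggregations.
total_from_columns = sum(column_store["amount"])

assert total_from_rows == total_from_columns
print(total_from_rows)
```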

The SAP Real-Time Data Platform, with its open APIs and standard protocols, offers federated access to an enterprise’s entire information portfolio. Through a single platform, IT gets powerful transactional data management, in-memory data management, enterprise data warehouse technology, analytics and mobile data management capabilities. Plus, total information management and real-time data movement services are foundational aspects of the SAP Real-Time Data Platform.

By adopting a real-time data platform instead of a traditional database approach, CIOs can respond quickly to manage information needs for the next upheaval in computing when it arrives. And, believe me, it will arrive.

Voters Reject Tailored Political Promotions

October 9, 2012 in Advertising,Analytics,Government,Social Media | Comments (0)

Next month, voters in the United States will cast ballots for president, along with a host of other races and referenda. What is different this election year from previous presidential contests is social media and what campaigns can do with the data generated there. Although Facebook, Twitter, and other social media sites existed in the 2008 election, they did not have the massive membership that they do today, and Google+, which now claims some 400 million users and counting, did not exist.

In 2012, social media sites have jumped into the political arena with both feet. For example, Facebook has partnered with CNN for an “I’m Voting” app, both to encourage people to vote and to generate data about voter preferences. Twitter is using sentiment analysis of tweets to create its Twitter Political Index, a daily snapshot of how tweeps “feel” about the two major party candidates.
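Twitter has not published the full mechanics of its index, but the basic idea of scoring tweet sentiment and rolling it up per candidate can be sketched in a few lines of Python. The word lists, weights, and sample tweets below are hypothetical stand-ins for a real, trained sentiment model.

```python
# A deliberately simple sentiment sketch: count positive and negative words
# per tweet, then average the scores for each candidate for the day.
# Production systems use trained classifiers; the aggregation idea is the same.

POSITIVE = {"great", "love", "strong", "win"}
NEGATIVE = {"bad", "fail", "weak", "lose"}

def tweet_score(text):
    """Crude per-tweet sentiment: positive hits minus negative hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_index(tweets_by_candidate):
    """Average sentiment per candidate over one day's tweets."""
    return {
        candidate: sum(tweet_score(t) for t in tweets) / max(len(tweets), 1)
        for candidate, tweets in tweets_by_candidate.items()
    }

# Hypothetical sample of the day's tweets.
print(daily_index({
    "Candidate A": ["great debate, strong answers", "love the plan"],
    "Candidate B": ["weak on jobs", "bad night, will lose"],
}))
```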

All this information from potential voters themselves is unprecedented in American politics. And it has some political analysts salivating at its potential to sway individual voters toward their candidate. As one political consultant told Forbes magazine, “Big data enables very precise narrowcasting of messages to target individual voters. That also enables one-on-one communication, and you’re more likely to get a response from a targeted voter.”

The problem is that the response from that targeted voter might be just the opposite of what a campaign manager expects. Recent survey data from the Annenberg School of Communications reveal that an astonishing 86% of Americans say they abhor political promotions tailored specifically to them as individuals. That figure is well above the 61% who say the same about product and service advertising targeted at them personally.

Worse for campaigns awash in big data, 64% of potential voters say such individualized political advertising would decrease their chances of voting for the candidate who targeted them. Further, 70% of those surveyed say that if a campaign sent advertisements to their friends because they themselves had “liked” the candidate’s Facebook page, they would be less likely to vote for that candidate.

Privacy and trust are at the center of Americans’ concern here. Many are already leery about trusting their information privacy on social networks. So, given that politicians are overwhelmingly viewed as “the least trusted profession” in the U.S., exploiting the targeting power of big data with political promotions in this election cycle looks like a lost cause.

Big Bucks for Big Data

October 2, 2012 in Analytics,Government,Operational BI | Comments (0)

Not everyone believes big data is a big concern. Is it just the latest headline scare to boost a CIO’s budget? Is it merely an incremental step in IT’s growing importance to business? Or is it just fodder for Gartner’s “hype cycle” chart?

Believe me, it’s real. It’s here. And it’s changing the IT landscape forever.

Although I wrote a column for Forbes called “The Big Lie About Big Data,” my provocatively titled essay was intended to undermine the notion that IT would be overwhelmed by the arrival of big data, not to suggest that big data itself is a falsehood. And there’s no better argument for big data’s reality than the CIO’s budget, which is increasingly allocating and spending money on big data solutions.

Governments on both sides of the Atlantic are gearing up for big data investments. In the United Kingdom, big data is deemed so valuable that there’s a call to make an Advanced Analytics team part of the Cabinet Office. And the White House has launched a $200 million initiative to tackle big data problems in the federal government.

But it’s more than just data-drenched governments that are spending on big data technologies. It’s business. In a Gartner survey conducted in July, 23% of respondents said they would be spending on specific big data problems inside their organizations in the next 12 months; another 31% said they would do the same within 24 months. In fact, the market research firm upped its 2012 IT spending forecast in part because enterprises are spending on IT services related to big data.

For me, the proof of whether technology solves a real business problem is when CIOs start spending money on it. That’s happening now and will only accelerate as vendors develop more tools and services that address more big data issues for more companies.

Data Management Discontent: Time for Change

September 25, 2012 in Data Management | Comments (0)

An unpublished study commissioned by SAP and completed in July by IDG found discontent with data management systems brewing among IT executives. The survey of Computerworld’s Inner Circle Research Panel indicated a significant number are so dissatisfied with their current environment that they are considering migrating to a new platform to fix the problems they see.

Changing data management environments is a serious IT endeavor, so the motivation must be compelling, particularly for large enterprises. The 200 respondents represented in the research cut across a broad range of industry segments and company sizes, with their organizations averaging more than 15,000 employees. So, what would make these big companies consider such a migration?

No surprise here: money.

According to IDG’s research, “Managing costs and efficiencies across a complex infrastructure” is the top data management problem to wrestle with in the next 12 months for 56% of those polled. It seems most organizations are not getting the value they expect from their current data management environment.

If it were solely a money problem, CIOs could go hat in hand to their CFOs and ask for bigger budgets. It’s also possible CIOs could squeeze their vendors for better contracts to improve a system’s total cost of ownership. Threatening to change platforms can often inspire a current vendor to bend a bit on its margins.

But there are other data management shortcomings revealed in the survey that money can’t fix. For example, 46% of respondents say they need to improve real-time access and analysis of data; 41% need a better data management approach to support a growing mobile workforce; and 20% acknowledge that “delivering big data value to internal customers” will need a new solution. As a result of all these and other ongoing data management issues, 49% in the survey “report plans to evaluate new data management approaches and solutions” within the next 24 months.

That’s an astonishing number. Think about it: half of those IT leaders surveyed are willing to spend the time and money needed to evaluate new data management environments because of the cost and shortcomings of their current system. While not all will migrate to a new platform, I suspect many will make the change. That promises tectonic change in the data management industry. Vendors beware!

Dark Data Tells Many Tales

September 18, 2012 in Analytics,Data Quality,Databases,Operational BI,Predictive analytics | Comments (0)

Every enterprise accumulates dark data. Companies don’t set out to hoard this unanalyzed information; it piles up because it’s created almost everywhere.

Servers in data centers generate an enormous trove of largely untapped log file data. Manufacturers’ shop floor control systems and robots produce dark data as well as widgets. Little of the data from a retailer’s point of sale system gets mined. Information from diagnostic equipment in intensive care units is generally ignored. The list goes on.

Pick your market sector and the systems they depend on, and you’ll uncover dark data. Organizations simply generate far more data than they can currently exploit.

Finding insight in these mostly ignored data sources is important. According to a report in Forbes, “Organizations that treat idle information, or so-called “dark data”, as anything less than having potential economic benefit will find themselves at increased competitive disadvantage.” And in areas like science, mining dark data should provide major breakthroughs to benefit us all. As Wired magazine put it: “Freeing up dark data could represent one of the biggest boons to research in decades, fueling advances in genetics, neuroscience, and biotech.”

That said, Gartner analyst Andrew White challenges proponents of the value of those untapped massive data repositories inside most enterprises. He writes, “But there is a flaw in this hyped argument about dark data. Unless you, the business user, have an idea of what you want to ask of this dark data, there is no point worrying about it.”

That’s a valid point of view. But it applies to all data. Unless someone has a query that is relevant to a given data set—whether it’s dark data or a basic company balance sheet—there is no point in worrying about it.

However, business users now have analytics tools capable of ingesting and mining vast amounts of dark data quickly. For example, a manufacturer can connect temporal data from shop floor control systems to product return patterns from customers to determine if there is a time-of-day problem along the assembly line.
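As a sketch of that manufacturing example, the analysis boils down to joining shop-floor timestamps to return records and grouping by hour of assembly. The column names and data below are invented, and pandas simply stands in for whatever analytics stack is actually in place.

```python
import pandas as pd

# Hypothetical shop-floor log: one row per unit, with its assembly timestamp.
production = pd.DataFrame({
    "serial_no": ["A1", "A2", "A3", "A4"],
    "assembled_at": pd.to_datetime([
        "2012-09-03 06:15", "2012-09-03 14:40",
        "2012-09-03 22:05", "2012-09-04 23:30",
    ]),
})

# Hypothetical returns data from the customer-facing side of the business.
returns = pd.DataFrame({"serial_no": ["A3", "A4"], "reason": ["defect", "defect"]})

# Join the two mostly ignored data sets and look for a time-of-day pattern.
merged = production.merge(returns, on="serial_no", how="left")
merged["returned"] = merged["reason"].notna()
return_rate_by_hour = (
    merged.groupby(merged["assembled_at"].dt.hour)["returned"].mean()
)
print(return_rate_by_hour)  # e.g. a spike for units built on the night shift
```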

I agree with White that business users must have questions for dark data before it becomes useful. But instead of telling them to not worry if they don’t have questions, I’d be concerned about analysts who were not able to ask the right questions. That’s far more worrying.

Not Enough Big Data Hype

September 5, 2012 in Analytics,Data Management,Government,Healthcare | Comments (0)

I understand that some people regard the rise of big data as little more than hyperbole. Indeed, for the first time, it made Gartner’s 2012 hype cycle, which purports to “highlight the common pattern of over enthusiasm, disillusionment and eventual realism that accompanies each new technology and innovation.” The research firm placed big data approaching its “peak of inflated expectations,” meaning we’re all about to be disappointed.

Sorry, Gartner, with all due respect, you’re dead wrong. Actually, the expectations from big data are underinflated as far as business, science, healthcare, government, and education are concerned. That’s because the benefits these market segments get from big data are not theoretical, they’re real.

In a post here some months ago I highlighted a few big data success stories for the SAP HANA in-memory database. Others have also underscored big data-driven breakthroughs for a variety of industries. A conference to be held later this month at Temple University will highlight big data success stories at Wal-Mart, Merck, NASA, and many other organizations. The fact is, if you’re not motivated by the “hype” around big data, your company will be outflanked by competitors who are.

The arrival of big data today is not unlike the appearance in businesses of the personal computer, circa 1981. Like the PC, big data existed long before it became an environment well-understood enough to be exploited. That is, PCs existed in the 1970s, but only a few forward-looking businesses used them before the 1980s because they were considered mere computational toys for hobbyists.

What changed? Combining the PC with nascent software tools such as VisiCalc, Lotus 1-2-3, WordStar, and dBase II made the difference. Suddenly, company executives everywhere could run sophisticated software without needing IT (MIS departments back then) developers to spend months creating programs.

Similarly, we’ve known about big data or “information overload” inside organizations since the term was popularized by Alvin Toffler in 1970 in his best seller Future Shock. But, as with the early years of the PC, we lacked the technologies to exploit the vast amounts of data inside companies.

As they say, that was then. Now we have the technology to exploit the data with relatively low-cost flash memory, multicore processors, hyper-fast in-memory and columnar databases, real-time complex-event processing, Hadoop/MapReduce, rapid deployment of complex analytical models through standards like Predictive Model Markup Language (PMML), and much more. These tools will make big data as pervasive and as useful to business as the PC has been.
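As one small example of that tooling, the MapReduce model behind Hadoop can be sketched in plain Python: map each record to key/value pairs, shuffle by key, then reduce all values that share a key. The log lines below are hypothetical, and this local simulation only illustrates the programming model; a real Hadoop job distributes the same two functions across a cluster.

```python
from collections import defaultdict

# Hypothetical web-server log lines: "timestamp status_code url".
log_lines = [
    "2012-09-05T10:00:01 200 /home",
    "2012-09-05T10:00:02 500 /checkout",
    "2012-09-05T10:00:03 200 /home",
    "2012-09-05T10:00:04 500 /checkout",
]

def map_phase(line):
    """Emit (status_code, 1) for each log line."""
    _, status, _ = line.split()
    yield status, 1

def reduce_phase(key, values):
    """Sum all the counts emitted for one status code."""
    return key, sum(values)

# Shuffle step: group intermediate values by key, as the framework would.
groups = defaultdict(list)
for line in log_lines:
    for key, value in map_phase(line):
        groups[key].append(value)

print(dict(reduce_phase(k, v) for k, v in groups.items()))  # {'200': 2, '500': 2}
```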

Gartner got its hype cycle wrong this time. Big data is already well along on the so-called Plateau of Productivity as its countless success stories already prove. Skeptics who doubt this are like people who once derided the PC as not being a serious computer. Today, it is those big data skeptics that we should not take too seriously.

Punts or Profits: Don’t Follow NFL Tradition

August 29, 2012 in Analytics,Careers,Databases,Predictive analytics | Comments (0)

American football can be perplexing to the uninitiated because of its complexity compared to the rest of the world’s true football. However, that complexity does have an upside. It creates plenty of opportunities to generate statistics with which to evaluate the game.

Statistics relating to one area of the game, punting, seem to indicate that football coaches are consistently making poor decisions by ignoring convincing data, thus hurting their team’s chances of winning. A study of this issue sheds some light on why company managers similarly might be making poor decisions that undermine their businesses’ chances of improving profits.

Here’s the main idea: During football games there are many fourth-down situations in which teams choose to punt the ball back to their opponent rather than try for the first down. But there’s statistical evidence showing that such a strategy leads to fewer wins. So why do teams punt?

One explanation is tradition. Teams have almost always punted on fourth down, except at key moments, usually late in the game when a team is behind. In business situations, that translates to explaining a failed decision as: “We’ve always done things this way.”

A more insidious answer comes from www.advancednflstats.com: “… coaches are thinking more about their job security than their team’s chances of winning. Coaches know that if they follow age-old convention by kicking and lose, then the players get most of the blame. But if they defy convention and go for the 1st down and fail, even if it was the best decision, they’ll take all the criticism.”

As one critic of punting on fourth down noted, there’s a bias by top management to favor logic over information. It’s an article of faith that “…managers who fail to maximize profits for the owners of their firms are likely to be fired and replaced by ones who do. Thus the case for firm maximization rests much more on logical argument than empirical evidence.”
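To make the statistical side of that argument concrete, here is a toy expected-points comparison in Python. The probabilities and point values are hypothetical, chosen only to show how the arithmetic can favor going for it even though failure is more visible.

```python
# Toy fourth-and-short decision. All numbers are hypothetical and exist only
# to illustrate the expected-value argument, not to model real NFL data.

p_convert     = 0.70   # chance of gaining the first down if you go for it
ev_if_convert = 2.5    # expected points from keeping the drive alive
ev_if_fail    = -1.5   # expected points after turning the ball over on downs
ev_after_punt = -0.5   # expected points after handing the ball back deep

ev_go_for_it = p_convert * ev_if_convert + (1 - p_convert) * ev_if_fail
ev_punt      = ev_after_punt

print(f"go for it: {ev_go_for_it:+.2f} expected points")  # +1.30
print(f"punt:      {ev_punt:+.2f} expected points")       # -0.50
```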

There are three lessons for business here. First, don’t punish managers who make the best decisions on the best data, but don’t always succeed. Such a move will discourage future good decision-making.

Second, while established, well-understood business processes can benefit a company, if data show them to be flawed, eliminate or change them, no matter how comfortable workers are with them.

And third, if you have managers who fail to apply data to their decision making and then push the blame on workers when those decisions prove wrong, it’s time to give those managers the boot.

Big Data Owes a Big Debt to Open Source

August 21, 2012 in Analytics,Databases,Open Source,Unstructured data | Comments (0)

Recently, I delivered a keynote at the annual Red Hat Summit and JBoss World conference, an event that drove home the importance of open source technology.

Without open source technology we’d be hobbled applying analytics solutions to big data problems. The Apache Hadoop MapReduce project comes to mind, of course. It has been adopted by companies at the bleeding edge of big data activities, including Amazon, Facebook, Google, the New York Times, Yahoo and countless others. Most major IT vendors, including my company, SAP, and my previous company for 20 years, Sybase, embrace Hadoop.

But open source has deeper roots in the analytics market than one open source project. Development work at Sybase, for example, is done using the Red Hat Linux operating system. The company’s products that target big data problems—SAP Sybase IQ, SAP Sybase Adaptive Server Enterprise, SAP Sybase Event Stream Processor, and more—all run on Linux. And the SAP HANA high-performance, all in-memory database uses SUSE Linux Enterprise Server.

SAP, which is celebrating its fortieth anniversary this year, started its life as a proprietary software company, one where developers once needed to know the ins and outs of the SAP programming environment in order to fully leverage its rich resources. That’s changed dramatically. Instead of having to learn the Business Application Programming Interface, the BAPI, developers today can use a variety of languages, including Ruby and PHP, and then connect to SAP applications via the NetWeaver Gateway.
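NetWeaver Gateway exposes SAP data as OData services over HTTP, so any language with an HTTP client can consume it. The sketch below uses Python's requests library; the host, service path, entity set, and credentials are placeholders, not a real endpoint.

```python
import requests

# Hypothetical Gateway host and OData service; substitute your own system's
# service catalog entries (typically published under /sap/opu/odata/...).
BASE_URL = "https://gateway.example.com/sap/opu/odata/sap/ZSALES_SRV"

response = requests.get(
    f"{BASE_URL}/SalesOrders",                 # hypothetical entity set
    params={"$top": 5, "$format": "json"},     # standard OData query options
    auth=("USER", "PASSWORD"),                 # placeholder credentials
    timeout=30,
)
response.raise_for_status()

# OData 2.0 wraps collections in a "d"/"results" envelope.
for order in response.json()["d"]["results"]:
    print(order)
```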

Throughout the analytics and database industries, with few notable exceptions, open source technology is viewed less as competition than as a toolbox. And that’s great news for organizations trying to solve their big data problems.

Big Data Startups Lure Investment Dollars

August 14, 2012 in Analytics,Financial Services,R&D | Comments (0)

Venture capitalists invested slightly less money last quarter than in the same period in 2011. However, most of that decline was in healthcare and energy startups. IT investments remained steady.

And, if the FinTech Innovation Lab Demo Day in New York City last month is an indicator, companies that are developing new products and services for big data opportunities will receive a good portion of those IT investments in 2012. The FinTech event targets financial services firms seeking technology innovators by inviting companies to present their new tools to Wall Street tech experts.

According to coverage of the event, the recent “demos in New York reflect the financial industry’s growing emphasis on ‘big data’ solutions – managing and deriving value from the terabytes and petabytes of raw, unstructured data that proliferates on social and mobile networks and the vast IT infrastructures of financial institutions.”

Wall Street’s interest is reflective of the overall investment growth in big data startups. Data from Thomson Reuters indicates that in 2011 venture capitalists invested $2.47 billion in big data-related startups, an increase over the $1.53 billion in 2010 and 2009’s $1.1 billion.

These venture dollars will continue to flow into the pockets of big data entrepreneurs in the coming years, especially if market growth continues apace. And the evidence suggests it will. IDC forecasts that big data-related technology spending will skyrocket from $3.2 billion in 2010 to an estimated $16.9 billion in 2015. IDC’s researchers say that, at a 40% compound annual growth rate (CAGR), the big data business will expand at about seven times the CAGR of the overall IT industry.
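Those two dollar figures and the 40% growth rate hang together; here is a quick back-of-the-envelope check of the arithmetic, nothing more.

```python
# Back out the compound annual growth rate implied by IDC's forecast:
# $3.2B in 2010 growing to $16.9B in 2015, i.e. five years of compounding.
start, end, years = 3.2, 16.9, 5

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 40%

# And compounding $3.2B at 40% for five years lands close to the forecast.
print(f"3.2 * 1.4^5 = {start * 1.4 ** years:.1f}")  # about 17.2
```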

Venture capitalists sense an opportunity and are ready to pounce. Big data entrepreneurs need to be ready with ideas and products. Then, after the market does its magic, the rest of us will benefit.

Powering Big Data

August 7, 2012 in Business Continuity,Data Centers | Comments (0)

If you look at today’s typical data center with its racks of blade servers, SANs, network switches, UPSs, and more humming away, crunching gigabytes, terabytes, even petabytes of data, everything seems fine. If you look into the eyes of a typical data center manager, you might see something else. You might see fear.

A survey of data center managers published in June showed data growth ballooning at a rate 25% ahead of last year’s dizzying information expansion. This explosion of data requires more servers, more storage and more bandwidth. These sharp demands threaten everything from the ability to conduct regular backups to simply keeping the lights (and air conditioning) on. Let’s drill down into the issue of data center power consumption.

I don’t want to upset data center managers any more than they already are, but consider that there are 4 billion mobile devices out there today ready to consume information from data centers and, more ominously, create data to put into them, requiring even more hardware and more electricity.

Five years ago, the U.S. Environmental Protection Agency issued a report predicting that power consumption by data centers in the United States would double between 2005 and 2010, just as it had from 2000 to 2005. IDC says power and cooling demand is the number one operational problem for data centers.

But IT systems are not standing still. They are giving new life – and efficiency – to old data centers. Consider that five years ago, when the EPA issued its dire report, Intel-based servers were running dual-core processors at 110 watts. Today Intel’s Xeon E7 architecture runs 10 cores with 20 processing threads at 130 watts. In short, per-core power draw has fallen from roughly 55 watts to 13 watts, so the latest servers deliver substantially more processing for every watt consumed.

And virtualization has improved server utilization dramatically, reducing the need to add more power-consuming machines. IDC estimates that by next year every physical server will house an average of 8.5 virtual machines.

And these advances already are paying dividends. Indeed, a study last year by Stanford’s Jonathan Koomey shows that the EPA’s projection was overly pessimistic. Power consumption by data centers grew at 36% in the U.S. between 2005 and 2010, not 100% as predicted. While the recent recession undoubtedly contributed to the lower use of electricity, consider that during that same period we saw dramatic growth in cloud computing, a doubling of Internet users globally and the building of massive, power-hungry data centers by companies like Amazon, Apple, Facebook, Google, Microsoft and others.

Certainly big data will continue to put pressure on IT operations in many areas, including power consumption. However, IT vendors will continue to squeeze more efficiencies out of hardware, assuring that big data will have data centers to process, store and serve it up to users.