The Case for a Single Repository Revisited

February 9, 2012 in Business Intelligence,Enterprise Architecture,Metadata,Metadata Management,Regulatory Compliance | Comments (0)

To follow the thought process from The Case for a Single Repository, a central repository also makes it easier to deal with compliance audits when and if they happen. Having all your information in one place makes you better prepared for audits, and being better prepared increases your chances of survival.

For the most part, cases against having a single repository come from the challenges of getting one set up. I always hear people say, “I want it, but I can’t defend the cost.” I won’t pretend it isn’t difficult to set up, or that it can’t be costly. But it can be one of those cases where you pay now or pay more later. Setting up a central repository is a big project, especially if your data is currently spread out all over your organization, in different tools, adhering to no common set of standards.

Over time, however, you might find that without a single repository, you’re constantly redoing the same frustrating work, chasing things down, trying to fix inconsistencies. Your results may be error-prone and unreliable, because the more you do manually, the greater the risk that the information will be mishandled at some point — unless you involve a lot of people and a lot of checks and balances in the process.

Is a single repository worth it? It depends on the complexity of your organization, and on your perspective.

If your environment is complex enough, and if you have a clear sense of your organizational inefficiencies and how they come into play as you manage information, then a single repository will take the pain away.

Then, it’s worth it.

Using metadata management tools that are fully integrated with a single repository takes away some of the pain of setting it up, and speeds the time to ROI. That makes it easier to sell to the business. It also — and this is the most important part — gives you the decision-making power you need to transform the enterprise.

Your Big Data is Not Alone

February 2, 2012 in Big Data,Business Alignment,Business Intelligence,Information Architecture,Modeling,Uncategorized | Comments (2)

I’ve been talking to a lot of folks lately about Big Data. One thing I keep stressing in any talk on Big Data is that most Big Data is big for a reason: it is a high-volume, low-value type of data. What I mean by that is that a single element (or transaction, if you will) of something in the realm of Big Data is, by itself, not very interesting. It is the patterns and trends that materialize out of looking at massive amounts of these elements that start to become interesting.

However, analysis of the Big Data alone is never enough to tell the full story. These Big Data sources (RFID and sensor, social network, web page, geospatial, scientific, multimedia, and unstructured communications such as email, video, and chat) are inputs to the overall analysis we need for decision making. We need to combine the Big Data trend analysis with data from more traditional, low-volume, high-value data sources. These high-value transactions (customer records, order and payment records, and others where individual elements are of significant business or organizational importance) are more ordered and structured in definition.

So – Big Data is not alone – it is part of all managed data, and needs to be understood together with highly structured elements. In other words, any effort to model and use Big Data needs to integrate with the efforts we use to architect information for the enterprise. Techniques like business glossaries and conceptual models that define business information context will be essential to aligning Big Data sources with all other data sources. We need to unify the understanding at a high enough level of abstraction that we can come up with a unified view. Big Data is nothing without the rest of the available information lending full context – and modeling (not physical but conceptual) will be essential to aligning everything.

And what, exactly, is Data Modeling?

January 26, 2012 in Big Data,Information Architecture,Modeling,Unstructured | Comments (0)

My blog post last week on defining metadata, its sources, and its purposes has had folks coming to me with a follow-on question. I recently appeared on The Briefing Room with William McKnight, where one of the questions I was asked was about the value of data modeling in the world of unstructured data, Big Data, NoSQL environments, and so forth. The answer raises a question of its own – what is the data model? Is it an abstraction of the physical representation of the database structures? If so, we clearly see a decline in the value of the data model in the face of schema-less or constantly changing schemas. But if it is an abstraction of the conceptual representation of the information, we see a rise in importance.

The language of the business, and the context of data points, provide meaning to the analysis we want to gain from these non-traditional systems. Longworth’s Information Ladder takes us on the journey from points of data (records collected by recording all our “transactions”) to meaningful information (the collation, aggregation and analysis of points of data by applying context to them). With Big Data, we do not even consider the data points themselves, but rather jump right to some form of trend analysis (an aggregation of sorts). Interpretation comes from comparison to a series of basis points used in decision making, taking data all the way to wisdom. The basis points themselves are context, and they can be modeled.
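
To make the ladder concrete, here is a minimal sketch in Python of moving from points of data to an interpreted trend. The numbers, names and baseline are entirely hypothetical, for illustration only:

```python
from statistics import mean

# Points of data: individual "transactions", uninteresting on their own.
daily_page_views = [10452, 11030, 9987, 12510, 13044, 12877, 13590]

# Aggregation: the trend is where things start to become interesting.
weekly_average = mean(daily_page_views)

# Context: a basis point we have modeled ahead of time (hypothetical).
BASELINE_VIEWS = 10000

# Interpretation: comparing the trend to its basis point supports a decision.
if weekly_average > BASELINE_VIEWS * 1.1:
    print(f"Traffic is at {weekly_average / BASELINE_VIEWS:.0%} of baseline: investigate the growth")
else:
    print("Traffic is within the expected range")
```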

So – what is a data model? It is first and foremost a way to capture business language and the information relationships that provide context, making it useful in decision-making activities. It is then specialized into representations of storage paradigms and ultimately, when appropriate, into detailed designs of physical systems where structures will be implemented to manage, store, move, transform and analyze data points. Today’s data models are way beyond traditional logical/physical representations of database systems implementation. Today’s data models are architectural drawings of the meaning and intent of information – simple, beautiful creations that drive the logic of applications, systems and technology, and the physical implementation of business information infrastructure.

So, exactly what is Metadata?

January 19, 2012 in Business Intelligence,Data Governance,Master Data Management,Metadata,Metadata Management,Uncategorized | Comments (0)

Everybody’s talking about it, but is everyone talking about the same thing? Every time I discuss metadata with someone, I realize, more often than not, that although we are both talking about it, we’re using the word to represent different things. So – what exactly is metadata?

Metadata is information about data. It helps describe the data. But there are two different types of descriptions: descriptions of the data container (how the data is structured, classified, grouped, stored, etc.) and descriptions of the content (additional details relevant to an individual file, thing, record, etc.). Traditional data models are examples of container metadata. Tags describing camera details on a photograph are examples of content metadata.
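
As a rough illustration of the difference, sticking with the photograph example (the field names below are invented, not taken from any particular standard):

```python
# Container metadata: how the data is structured, classified and stored.
photo_container_metadata = {
    "table": "photos",
    "columns": {"id": "integer", "taken_at": "timestamp", "image": "blob"},
    "storage": "archive/photos/",  # hypothetical physical location
}

# Content metadata: details about one individual thing (EXIF-style tags).
photo_content_metadata = {
    "camera": "Nikon D90",  # hypothetical camera details
    "exposure": "1/250",
    "aperture": "f/8",
    "taken_at": "2011-06-14T09:30:00Z",
}
```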

There are also different levels of detail in the metadata itself – some metadata describes overall context and concept, other metadata describes very detailed physical characteristics. Most content metadata is very physical, very detailed about a specific record or specific information element. Today, a lot of the metadata I find being managed about containers is also physical, describing a table structure and its physical data system location, or the path and transformation steps to move that data into a central store. I see this type of metadata being used for master data management (MDM) or business intelligence (BI). However, the metadata I like talking about most is the more abstract conceptual metadata, the fundamental meaning behind the things we are storing, tracking and describing. This metadata is the knowledge that defines an enterprise-wide holistic view. This is the view of what we are doing with our information, when we need it, who cares about it, how we should govern and manage it for quality and integrity, where it is needed throughout our business processes and activities, and ultimately, why we need it there at all.

So – metadata ultimately is three things to me: information about the contexts and concepts, information about the physical containers and storage of these things, and information about the data itself. In enterprise information management (EIM) circles, the last two are the most important. In architecture and planning, the first two. Ultimately, we need to care about all three, so we can align the planning and architecture of the enterprise with the actual data being stored, managed, integrated and manipulated into business decision support and regulatory reports.

Top 3 Things a Data Architecture Tool Needs to Have in 2012

January 12, 2012 in Business Alignment,Impact Analysis,Information Architecture,Metadata,Metadata Management,Modeling | Comments (0)

Data modeling alone is no longer enough to help define and describe an enterprise’s information strategy. Information professionals need help in aligning business viewpoints with the technical implementations, and the tools used by today’s data and information architects will need to provide new enabling capabilities. So, what are the top three things an information architect, data architect or analyst should look for in their next design tool?

1) True Business Glossary. Capturing the language of the business independent of data structure, independent of physical form, is the ideal way to properly identify, classify and manage disparate information sources. Looking ahead, these glossaries are the root of a business language taxonomy, or an ontology, and of other tools that will help us manage unstructured data and Big Data. With a solid understanding of a single business language, terms can then be mapped to design artifacts (not just the data elements making up a data dictionary; processes, applications, systems, technologies, business capabilities and more can be aligned to this common language). A rough sketch of such a term-to-artifact mapping appears after this list.

2) Business Process and other business models. These models define the context for information that we manage. The main idea is this: by aligning all information concepts with all uses of that same concept, across all business units and all business use cases, we can ensure a consistent definition of that information regardless of which subset of the attributes we implement in a given application or technology (including persistent components and XML messages). This not only provides a solid basis for aligned future information systems development, it provides key traceability needed for real impact analysis (not just “What systems are connected to this system” but “what else does the business DO with this data so I can be sure I’m not breaking anything there, either”).

3) Web-based collaboration. This is for modelers as much as it is for non-modeling stakeholders. This one is really about sharing the metadata (knowledge about our information definitions, business processes, business terms and glossaries, etc.) with folks who are not using a modeling tool but have a say in this knowledge set. Perhaps they are business-centric subject matter experts participating in information stewardship, perhaps they are business managers looking to understand the capabilities available to their business, or simply business analysts helping IT get the specifications right. Whatever the reason, people want more than just a report, or a read-only website, for browsing the architecture definitions. These participants also want to provide direct feedback. While they will likely not be creating diagrams (yet), they will want to share their understanding, and to do so directly in the same simple UI in which they examine the knowledge.
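
Coming back to the glossary mapping in point 1, here is a minimal sketch of a single business term mapped to design artifacts beyond the data dictionary. The structure, names and steward are all hypothetical:

```python
# One glossary term, aligned to artifacts across the architecture.
glossary_term = {
    "term": "Customer",
    "definition": "A party that has purchased, or may purchase, our goods or services.",
    "steward": "Sales Operations",  # hypothetical information steward
    "mapped_artifacts": [
        {"type": "entity", "name": "CUSTOMER", "model": "Enterprise Conceptual Model"},
        {"type": "data element", "name": "crm.customers"},
        {"type": "process", "name": "Order-to-Cash"},
        {"type": "application", "name": "Point of Sale"},
        {"type": "business capability", "name": "Customer Management"},
    ],
}
```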

Before you continue working on your information architecture, I recommend that you evaluate what you currently are using for modeling, and make sure it will be ready for the challenges that lie ahead.

A Happy New Year for Information and Enterprise Architecture

January 5, 2012 in Business Intelligence,Data Governance,Enterprise Architecture,Information Architecture,Metadata,Modeling | Comments (0)

Last year, I visited many organizations worldwide, in many industries, all struggling with information problems. These information problems were not caused simply by challenges in identifying physical data sources and linking them together. The root cause stemmed from gaps in understanding between the business need for information and the IT facilities and infrastructure to deliver it. Couple this with the need to mobilize far greater volumes of information in far less time, and the need for architecture has grown from a “nice to have” to a “cannot live without”.

Looking ahead, I personally expect that many more organizations will expand from workgroup-focused data modeling to enterprise information architecture and enterprise architecture. New investments will be made in managing business glossaries together with enterprise conceptual data models, aligned with business processes and leveraging the logical/physical models already developed. The design-time repository as a single version of the truth for architecture and planning will become a “must have” asset. Information and enterprise architects will develop metadata repositories, or knowledge bases, that ensure all stakeholders are driving towards the same conclusions – an increasing need as multiple viewpoints and perspectives must be aligned. Development and run-time metadata will integrate with design-time metadata to ensure that planning and implementation stay in sync.

Whether you are dealing with Master Data Management (MDM), Business Intelligence (BI) or increasing agility with growing data volumes and source types (structured, unstructured, Big Data, etc.), the principles of information and enterprise architecture will drive the plan that leads to the successful delivery of new information systems, new business processes, new applications and new technologies. Whether it is movement to the cloud, incorporating mobility, or simply running a better business, planning will be key to developing effective ways to manage and drive enterprise change.

I look forward to 2012 and exploring all of these topics, and more, in detail.

The Evolution of Architecture Tools

March 31, 2011 in Enterprise Architecture,Information Architecture,Modeling | Comments (0)

A recent “trip down memory lane” had me thinking about how different architecture and modeling tools are today than when I first started in this business over 20 years ago.

Enterprise Architects today need to understand not just how business processes relate to various projects and technologies, but really how the business itself is shaped and formed at every level of abstraction. Today, we need to be able to define the business context independent of any business or technology implementation to develop the blueprints that shape the company structure, form and actual processes. We need to be able to relate that to the design of the enterprise, and how we plan to structure the ways and means by which we achieve enterprise goals. Finally, we need to drive that into the technology used to support implementation.

Gartner was quoted in a recent Information Management article talking about IT’s shift from technology provider to strategic collaborator, and that is exactly my vision. We want to move the architects of the enterprise away from merely responding to business strategy and back into the executive leadership strategy room where the role belongs, helping shape next-generation plans through enabling technology. The tools we’re building today support more than implementation concepts; they also support business strategy concepts and the alignment techniques to link and synchronize all of this together, and that’s where we’ve been going all along.

Data: Your Best Asset or Worst Nightmare

February 7, 2011 in Business Intelligence,Data Governance,Enterprise Architecture,Information Architecture,Metadata,Metadata Management,Modeling,Uncategorized | Comments (0)

As I was re-reading David Loshin’s recent white paper the other day, another of his points caught my eye. To quote the article: “The absence of data governance threatens business success.”

That’s a very powerful statement. Loshin goes on to support it by arguing that without governance, we’ll fail at repurposing data. In short: data can be your company’s best asset or worst nightmare. I reused this concept in my recent webcast to the financial industry earlier this month (which shares its name with this blog post).

Tools for enterprise resource planning (ERP) or customer relationship management (CRM) require reusing data from other systems. If that data is untrustworthy, it affects our ability to meet the business needs, as well as the compliance requirements of Sarbanes-Oxley or other regulations. It’s not just that we need to understand the data, but we need to know we can trust it. That’s what governance gives us.

The collective wisdom behind Wikipedia says that governance includes the administration of data quality, management and policy, as well as the business processes and risks associated with the data. If we want to succeed in these areas, then we need to improve data quality. And that raises the question: Who owns the data and can improve its quality?

You may think the business unit that owns the database and applications that use it owns the data. But that’s too narrow a definition.

The way to change this flawed perception is to establish the income potential of data, and look at it as something that provides value to the business just as any other good or service. If that’s the case, then of course data needs governance.

Data governance best practices include having a governance body in the organization that embodies the concept of stewards and stewardship. This concept is key because it establishes ownership of data, and of how it is put to use, in a way that is continuous and permanent, outside of any one project, system, or business unit. It ensures that someone (or several people) will be tasked with understanding data at the enterprise level and making certain that it’s used accurately, in order to improve quality and reduce risk.

With the right tools and technical infrastructure to support this kind of stewardship, well governed data becomes a true corporate asset.

What Is Impact Analysis?

January 28, 2011 in Enterprise Architecture,Impact Analysis,Information Architecture,Metadata,Modeling | Comments (0)

While I have been travelling recently, giving some talks on modeling and metadata management, I have had a lot of people ask me the same question, “What is impact analysis?”. As I’ve been giving the answer a few different ways depending on who is asking, I’ve come to the conclusion that there are a lot of different ways to think about it.

Or, more precisely, there are four main viewpoints to take into account: business, architecture, design and implementation. Each discipline will have its own perspective. So, the answer to “What is impact analysis?” depends on who you are. When you’re evaluating a modeling tool to provide impact analysis, it’s important to consider all four viewpoints.

From a business perspective, impact essentially means cost and risk, as well as what options exist to minimize both. Impact is measured by how much a thing costs and what the downstream damage may be if things go wrong.

Architects tasked with estimating risk and cost for the business need to know where change is going to occur, what dependencies relate to the change, and what the complexities are. If they can identify those things, they can get a pretty good sense of risk and cost. But they also need to factor in the implementation workload to get a true estimate. For that, they go to the designers.

Designers require a much more granular level of detail. They need a list of the servers, systems, applications and databases involved. They need to know how much and what kind of code is required, and whether any software or hardware updates are necessary. In order to answer these questions, they tap the implementers (or developers).

Ultimately, the implementers (or developers) need to understand all the new pieces of code that would be required, where they’d need to be added and how to check in and roll out the changes. They are also interested in what resources are required, and what are the agreed upon, or required, delivery dates.

So, in order to understand the cost and risk of a proposed change, we need to know which servers, systems, applications and databases will be involved, and, when we get right down to it, how every bit and byte will be implemented, and by whom.

In short, effective impact analysis requires that we know everything at the business, architecture, design and development levels. It requires that we capture each viewpoint, and make it available to the people who need it. Additionally, it requires that we are able to sync those viewpoints, so that when there’s a change at one level, it gets implemented at all levels.
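
As a minimal sketch of what that syncing enables, impact analysis can be treated as a traversal of a dependency graph that crosses all four viewpoints. The artifacts and links below are invented for illustration; in practice they would come from the tool’s repository:

```python
from collections import deque

# Each edge says "a change to X impacts Y", crossing viewpoint levels.
dependencies = {
    "business: credit-check policy": ["architecture: order service"],
    "architecture: order service": ["design: orders database", "design: order API"],
    "design: orders database": ["implementation: ORDERS table DDL"],
    "design: order API": ["implementation: order-service code"],
}

def impact_of(artifact):
    """Breadth-first walk of everything downstream of a changed artifact."""
    impacted, seen, queue = [], {artifact}, deque([artifact])
    while queue:
        for downstream in dependencies.get(queue.popleft(), []):
            if downstream not in seen:
                seen.add(downstream)
                impacted.append(downstream)
                queue.append(downstream)
    return impacted

# A business-level change rolls down through architecture, design and code.
print(impact_of("business: credit-check policy"))
```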

How do you manage all of this? With the right tool. The right tool can both capture knowledge at all levels and present the impact analysis for each one. The right tool can give the right person the right viewpoint at the right time. The right tool can sync all viewpoints so that everyone can see his or her own information, as well as the different levels that all roll up to management.

So you can see that impact analysis isn’t just one thing, but rather a federation of a lot of things working together.

Putting Information Architecture First

January 18, 2011 in Business Intelligence,Information Architecture,Metadata,Metadata Management,Modeling | Comments (0)

I just read a paper from David Loshin, Data Governance, Data Architecture, and Metadata Essentials. David’s an expert on data management. Reading it got me thinking about enterprise information architecture again, and how it helps us focus on business information first, and on technical implementation as a means to that end.

Information architecture is a key subset of enterprise architecture. It helps align business information, and it stems from the idea of “enterprise concept vs. project context”. In essence, we should consider projects as pieces of a whole rather than singular, standalone implementations. By looking at the big picture, we are able to see the sum of all the parts, and we begin to see the need to understand each piece and its role within that big picture.

It boils down to knowing both the primary and secondary uses of data.

For example, say we’re building a Point of Sale (POS) application that will track some details about the customer. The purpose of the POS application is to record sales transactions so we can do inventory, calculate commissions on each sale, and ensure we’re collecting the right amount of sales tax. We may need to know a customer’s name, payment details and phone number to process the order, but nothing more.

But what if we looked at the customer from a different perspective, let’s say Marketing’s Customer Relationship Management (CRM) perspective? The CRM application helps Marketing optimize the business, and to do that, Marketing needs to know the customer in more detail than the POS application requires. Marketing wants to know age, gender, income range, and other demographic details that will help shape how it services the customer. The best source of information about the customer will probably be the POS application; but because the POS development team did not consider the CRM use cases when implementing the customer structures, the details Marketing is looking for are simply not going to be there.

So – we have two choices: try to find other sources of customer demographic information, or change the POS application to capture more detail. This idea of defining ‘customer’ as a superset of all the details needed by all the business processes is a key component of information architecture, and really the key takeaway I had from reading David’s paper.
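
As a minimal sketch of the superset idea (the attributes below are hypothetical): the enterprise-level definition of the customer carries every attribute any business process needs, and each application implements only its own projection of it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    # Attributes the POS application needs to process an order.
    name: str
    phone: str
    payment_details: str
    # Additional attributes the CRM / Marketing perspective needs.
    age: Optional[int] = None
    gender: Optional[str] = None
    income_range: Optional[str] = None

# Each application implements a subset of the one enterprise concept.
POS_ATTRIBUTES = ("name", "phone", "payment_details")
CRM_ATTRIBUTES = ("name", "age", "gender", "income_range")
```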

Essentially, before we build the POS application, we need to define the right architecture for the definition of the customer in our enterprise, for all business uses, both primary (IT) and secondary (business). This is David Loshin’s core concept in his paper. That’s why it’s important to capture the right concept of information first, before we design and implement any systems, projects or applications.

By doing this, we’ll be able to repurpose the information more readily, achieve easier and faster data integration, and provide better decision support and analytics.