Friday, December 28, 2012

The Power of Business Rules Management

There is growing interest in business rules and in business rules management systems (BRMSs), often called business rules engines. Major software vendors, from IBM, Oracle and SAP to Red Hat and SAS, have developed or are developing business rules management systems.

The use of these systems to support decision management, the practice of automating and managing high-volume, transactional decisions, is growing rapidly. A new standard, the Decision Model and Notation (DMN) standard, is under development and will bring consistency of representation to the industry. Yet there is still a sense that this is a niche technology, and it is somewhat poorly understood outside of its traditional areas of strength. So what is a BRMS, and how does it support decision management?

What is Decision Management?

First we need to discuss decision management. Decision management is a business approach that explicitly focuses on the management and automation of business decisions, especially the day-to-day operational decisions that must be made to complete a process or handle a specific transaction. This approach brings together business rules and various analytic techniques, and it is widely used to apply BRMSs effectively. While there are many other things you can do with business rules (e.g., improve data quality, manage user interfaces), it is the use of business rules to manage decisions that makes a BRMS compelling.

If you want to know more about decision management, you can check out my columns, entitled "Building Agile, Analytic and Adaptive Systems" and "Four Principles of Decision Management Systems."

Business Rules Management Systems

A BRMS is a complete set of software components for the creation, testing, management, deployment and ongoing maintenance of business rules or decision logic in a production operational environment. These systems used to be, and sometimes still are, called business rule engines (BREs). However, not all BRMSs use a BRE (some generate code instead), and even when a BRMS includes a BRE, it is just one part of a complete system — an important part, but one that deals only with execution. A BRE determines which rules need to be executed and in what order (a minimal sketch of this appears after the list below). A BRMS is concerned with a lot more, including:

  • The development and testing of the rules
  • Linking business rules to data sources
  • Deploying business rules to decision services in different computing environments
  • Identifying rule conflicts and quality issues
  • Enabling business user rule maintenance
  • Measuring and reporting rule effectiveness
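
To make the execution side concrete, here is a minimal sketch (in Python) of how a rule engine might select and fire rules in order. It is purely illustrative: the salience-based ordering, the rule names and the claim attributes are assumptions made for this example, not any particular vendor's API.

```python
# Minimal, illustrative rule evaluation: rules as condition/action pairs,
# fired in salience (priority) order. Rule names and claim attributes are
# hypothetical; this is not any vendor's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    salience: int                       # higher salience fires first
    condition: Callable[[dict], bool]   # test against the case data
    action: Callable[[dict], None]      # update the case data

def run_rules(rules: list[Rule], case: dict) -> list[str]:
    """Evaluate rules in salience order; return the names of the rules that fired."""
    fired = []
    for rule in sorted(rules, key=lambda r: r.salience, reverse=True):
        if rule.condition(case):
            rule.action(case)
            fired.append(rule.name)
    return fired

rules = [
    Rule("flag-high-value", 10,
         lambda c: c["claim_amount"] > 10_000,
         lambda c: c.update(review="manual")),
    Rule("auto-approve-small", 5,
         lambda c: c["claim_amount"] <= 1_000 and "review" not in c,
         lambda c: c.update(review="auto")),
]

case = {"claim_amount": 500}
print(run_rules(rules, case), case)
# ['auto-approve-small'] {'claim_amount': 500, 'review': 'auto'}
```

A production engine would use efficient matching algorithms (such as Rete) and would load its rules from a repository rather than having them written in code, but the basic contract is the same: conditions are evaluated against case data and actions fire in a defined order.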

To deliver on all this, a BRMS needs a robust set of capabilities for managing decision logic, such as those documented in my Decision Management Systems Platform Technologies Report and shown in Figure 1, The Elements of Decision Logic Management Supported by a Typical BRMS. Specifically, you need:

  • An enterprise-class rule repository with audit trails and versioning.
  • Technical rule management tools that allow technical users (e.g., developers and architects) to integrate business rules with the rest of the environment and edit/manage technical rules.
  • Non-technical rule management tools that allow business analysts and even business users to routinely change and manage business rules (see below).
  • Verification and validation tools, usable by both technical and business users, that take advantage of the nature of business rules to make sure they are correct and complete (see the sketch after this list).
  • Testing and debugging tools to confirm that you get the decisions you were expecting.
  • Deployment tools supporting multiple platforms that allow the logic you have specified to be deployed into a decision service (see below).
  • Data management capabilities to bring real enterprise data into the environment so rules can be written against it.
  • Impact analysis tools so that a user can see what the impact of a change will be before he or she makes it.
  • Either a high-performance BRE to which the rules can be deployed or an ability to generate code that can be deployed.
  • An ability to support the logging of rule execution, so you can tell exactly how a particular decision was made and which rules were executed.
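
As a rough illustration of what such verification can look like, here is a small sketch that checks a set of rules, each covering a numeric band of a single attribute, for gaps and overlaps. The attribute (claim amount) and the bands are hypothetical; real BRMS verification tools handle far more general rule forms.

```python
# Illustrative completeness/overlap check for rules that each cover a
# numeric band [start, end) of a single attribute (hypothetically,
# claim amount). Real verification tools are far more general.
def check_bands(bands: list[tuple[float, float]], lo: float, hi: float) -> None:
    """Report gaps and overlaps in the range [lo, hi)."""
    cursor = lo
    for start, end in sorted(bands):
        if start > cursor:
            print(f"gap: no rule covers [{cursor}, {start})")
        elif start < cursor:
            print(f"overlap: more than one rule covers [{start}, {min(end, cursor)})")
        cursor = max(cursor, end)
    if cursor < hi:
        print(f"gap: no rule covers [{cursor}, {hi})")

# Three routing rules keyed on claim amount; note the uncovered band.
check_bands([(0, 1_000), (1_000, 10_000), (20_000, 50_000)], 0, 50_000)
# gap: no rule covers [10000, 20000)
```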

It should be noted that a modern BRMS is likely to support the management of rules derived from optimization and analytic tools as well as rules specified explicitly by a user.

Decision Services

Decision services are the link between a BRMS, a focus on managing decisions, and the practice of decision management. They are sometimes called transparent decision services, agile decision services or even decision agents. Decision services are the key technical deliverable from the combination of a BRMS and the decision management approach. A decision service is a self-contained, callable service or agent with a view of all the information, conditions and actions that need to be considered to make an operational business decision. Deployed on a service-oriented infrastructure and available to other services, to service-enabled applications and to business processes managed using a business process management system, decision services package up all the rules (and any analytics) that go into making a decision.

Decisions can often be thought of in terms of a question for which there is a known, allowed set of answers. For instance, a decision about routing an insurance claim might be thought of as the question, “How should we handle this claim?” Allowed answers include auto pay, fast track, refer to claims adjuster or refer to fraud investigation group. A decision service handling claims routing would take the information about the claim and then return one of the allowed answers to a calling process or service. In other words, a decision service answers a business question for other services.
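
A minimal sketch of such a decision service, using the claims-routing example above, might look like the following. The allowed answers come from the example; the claim fields and thresholds are invented for illustration.

```python
# Sketch of a callable decision service for claims routing. The allowed
# answers come from the example above; the claim fields and thresholds
# are invented for illustration.
ALLOWED_ANSWERS = {"auto pay", "fast track",
                   "refer to claims adjuster",
                   "refer to fraud investigation group"}

def route_claim(claim: dict) -> str:
    """Answer the business question: how should we handle this claim?"""
    if claim.get("fraud_score", 0.0) > 0.8:
        answer = "refer to fraud investigation group"
    elif claim["amount"] <= 500 and claim.get("policy_in_good_standing", True):
        answer = "auto pay"
    elif claim["amount"] <= 5_000:
        answer = "fast track"
    else:
        answer = "refer to claims adjuster"
    assert answer in ALLOWED_ANSWERS   # the service only returns allowed answers
    return answer

print(route_claim({"amount": 300, "fraud_score": 0.1}))     # auto pay
print(route_claim({"amount": 25_000, "fraud_score": 0.2}))  # refer to claims adjuster
```

In a real deployment this logic would be generated or interpreted from rules managed in the BRMS repository and exposed behind a service interface, rather than hand-coded as above.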


Wednesday, December 12, 2012

Here's a Better Way to Remember Things

A group of Brazilian entrepreneurs who have come north for a week's worth of ideas on growing their ventures are leaving a class when one of them breaks from the pack toward the coffee maker, where I'm heading too. He works the machine first, reciting something again and again in Portuguese as he watches his cup fill.

"Excuse me?," I say, unsure he's talking to me.

"Sorry, I am repeating what the lecturer said," he explains, "so I remember later."

Remembering new information is an underappreciated skill; the fact that most of us have never evolved our technique beyond the rudimentary, ad hoc approaches we used as middle schoolers suggests as much. Yet it is required for any sort of professional growth, since the need to learn is high, and it can separate exceptional performances from mediocre ones. After all, would you prefer to hire the consultant who presented using cue cards or the one who pitched from memory?

Fortunately for us, insights from cognitive psychology have vastly improved our understanding of how we remember. Many of these are accepted wisdom in the neurological and psychological realms. But it hasn't been easy to transfer that knowledge to actual tools for individuals. Until recently, anyway. Easy-to-use auto-analytic tools that exploit our understanding of memory can now help you treat remembering as the skill it is, and improve it the same way you improve any professional skill, like public speaking. Here's how to get started.

First, focus on the right unit of measure. Yes, your objective is to remember better, but you'll get the best results by focusing on forgetting as your base unit of analysis.

Experimental psychologist Hermann Ebbinghaus's pioneering discovery of the forgetting curve shows that we forget the majority of newly learned information within hours or days, unless we review it again and again. This alone won't be a shock to many of us. But Ebbinghaus demonstrated how systematic forgetting is: it occurs exponentially along a predictable curve — researchers call this "exponential decay."


Different things you're trying to remember will have different curves. For instance, that piece of operations data you remember clearly, since you prepped and presented it to your team, has a flatter downward curve (you'll remember it longer) than the now-hazy sales figure a colleague mentioned during the same team meeting. Even so, each curve is predictable.
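
One common simplification of the forgetting curve treats retention as exponential decay with a per-item "stability": the larger the stability, the flatter the curve. The sketch below uses made-up stability values to contrast the two cases just described.

```python
# Simplified exponential-decay model of the forgetting curve:
# retention(t) = exp(-t / stability). A larger stability means a flatter
# curve. The stability values below are made up for illustration.
import math

def retention(days_elapsed: float, stability_days: float) -> float:
    return math.exp(-days_elapsed / stability_days)

for label, stability in [("data you prepped and presented", 10.0),
                         ("figure a colleague mentioned", 2.0)]:
    curve = ", ".join(f"{retention(d, stability):.2f}" for d in range(8))
    print(f"{label}: days 0-7 -> {curve}")
```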

Practice remembering at the right time. Think about how you really use your memory for things that matter to you and your career, like in preparing for a speech. Maybe you're a crammer who tries to prime your memory by doing as many dry-runs as possible the night before. Or perhaps you've committed to ploddingly rehearsing your lines each afternoon for a month from 3 pm to 4 pm. Or maybe you're an improviser who finds time here and there, rehearsing what you'll say at random moments between meetings.
The forgetting curve suggests you should follow a very different memorization process than any of these entail. It shows that there's a precise moment that's best for practicing your lines. That moment is just before you are about to forget them.

So sessions aimed at learning new content should happen at "about-to-forget" moments, with spaces between practice sessions increasing as you approach mastery. This learning process is called spaced repetition, and it can help us avoid the inefficiencies and risks of ad hoc memorization methods like cramming.

Incorporate auto-analytics tools. OK, so you get the idea that you should try to commit things to memory only when you are just about to forget them. But how do you know when that critical moment is about to happen? How do you know what your forgetting curve looks like?

Almost like your fingerprint, your forgetting curve is very different from anyone else's. But a type of auto-analytics tool called "Spaced Repetition Software" or "SRS" can learn the idiosyncrasies of your memory, and then ping you to practice at the optimal time.

These mobile and desktop tools are like automated flashcards, though you work through your "pile" according to your personal algorithm and the rules of spaced repetition.
They fine-tune your algorithm using a straightforward rating system. Let's say you're a newly appointed manager learning some finance for the first time, and you're trying to improve your recall of many new terms. When the term "Leverage" appears you recall its meaning effortlessly and assign it an A. But when "Arbitrage" appears you assign it a D since you must labor to recall its basic meaning, and even then it remains fuzzy.

The tool continually hones its prompts based on your input. No doubt you'll see "Arbitrage" sooner than "Leverage," since practice sessions for the term you already know well are scheduled later and less frequently to maximize efficient memorization.
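
Under the hood, many SRS tools use some variant of interval scheduling in the spirit of the SM-2 algorithm. The sketch below is a loose, simplified version that maps the A-D grades from the example to quality scores; the constants are illustrative, not what any particular tool actually uses.

```python
# Loose, simplified sketch of SM-2-style interval scheduling, using the
# A-D grades from the example. The constants are illustrative; real SRS
# tools (Anki, SuperMemo and others) use their own tuned variants.
GRADE_QUALITY = {"A": 5, "B": 4, "C": 3, "D": 2}

def next_review(interval_days: float, ease: float, grade: str) -> tuple[float, float]:
    """Return (new_interval_days, new_ease) after a practice session."""
    quality = GRADE_QUALITY[grade]
    if quality < 3:                          # a lapse: start the item over
        return 1.0, max(1.3, ease - 0.2)
    ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    return interval_days * ease, ease

# "Leverage" graded A vs. "Arbitrage" graded D, both starting at 1 day:
print(next_review(1.0, 2.5, "A"))   # longer interval, higher ease
print(next_review(1.0, 2.5, "D"))   # back to 1 day, lower ease
```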

Map your practice to your priorities. Finally, be very selective when choosing what you want to get better at remembering. In theory, you could work on mastering numerous new domains at once, but experimental research and case studies suggest this isn't practical for full-time workers.

Focus instead on a single development opportunity integral to your career. (See the accompanying chart for examples of cases where you could use SRS.) Does this opportunity require learning new terms, concepts, or narratives? If yes, then it makes sense to focus on hacking your memory with these computing tools to pursue it.

In short, when you're on a steep learning curve, remember the forgetting curve, and then beat it.


Thursday, December 6, 2012

What a Big-Data Business Model Looks Like

The rise of big data is an exciting — if in some cases scary — development for business. Together with the complementary technology forces of social, mobile, the cloud, and unified communications, big data brings countless new opportunities for learning about customers and their wants and needs. It also brings the potential for disruption and realignment. Organizations that truly embrace big data can create new opportunities for strategic differentiation in this era of engagement. Those that don't fully engage, or that misunderstand the opportunities, can lose out.

There are a number of new business models emerging in the big data world. In my research, I see three main approaches standing out. The first focuses on using data to create differentiated offerings. The second involves brokering this information. The third is about building networks to deliver data where it's needed, when it's needed.

Differentiation creates new experiences. For a decade or so now, we've seen technology and data bring new levels of personalization and relevance. Google's AdSense delivers advertising that's actually related to what users are looking for. Online retailers are able to offer — via FedEx, UPS, and even the U.S. Postal Service — up-to-the-minute tracking of where your packages are. Map services from Google, Microsoft, Yahoo!, and now Apple provide information linked to where you are.

Big data offers opportunities for many more service offerings that will improve customer satisfaction and provide contextual relevance. Imagine package tracking that allows you to change the delivery address as you head from home to office. Or map-based services that link your fuel supply to availability of fueling stations. If you were low on fuel and your car spoke to your maps app, you could not only find the nearest open gas stations within a 10-mile radius, but also receive the price per gallon. I'd personally pay a few dollars a month for a contextual service that delivers the peace of mind of never running out of fuel on the road.
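
A sketch of how such a contextual service might combine these data sources is below; the station feed, fields and numbers are entirely made up for illustration.

```python
# Hypothetical contextual fuel service: combine the car's remaining range
# with a made-up feed of nearby stations, then rank open, reachable
# stations by price. All names and numbers are invented.
def suggest_stations(range_miles: float, stations: list[dict],
                     max_detour_miles: float = 10.0) -> list[dict]:
    reachable = [s for s in stations
                 if s["is_open"]
                 and s["distance_miles"] <= min(range_miles, max_detour_miles)]
    return sorted(reachable, key=lambda s: s["price_per_gallon"])

stations = [
    {"name": "Station A", "distance_miles": 3.2, "price_per_gallon": 3.45, "is_open": True},
    {"name": "Station B", "distance_miles": 8.9, "price_per_gallon": 3.29, "is_open": True},
    {"name": "Station C", "distance_miles": 6.1, "price_per_gallon": 3.19, "is_open": False},
]

for s in suggest_stations(range_miles=25, stations=stations):
    print(s["name"], s["price_per_gallon"])
# Station B 3.29
# Station A 3.45
```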

Brokering augments the value of information. Companies such as Bloomberg, Experian, and Dun & Bradstreet already sell raw information, provide benchmarking services, and deliver analysis and insights from structured data sources. In a big data world, though, these proprietary systems may struggle to keep up. Opportunities will arise for new forms of information brokering and new types of brokers that address new unstructured, often open data sources such as social media, chat streams, and video. Organizations will mash up data to create new revenue streams.

The permutations of available data will explode, leading to sub-sub specialized streams that can tell you the number of left-handed Toyota drivers who drink four cups of coffee every day but are vegan and seek a car wash during their lunch break. New players will emerge to bring these insights together and repackage them to provide relevancy and context.

For example, retailers like Amazon could sell raw information on the hottest purchase categories. Additional data on weather patterns and payment volumes from other partners could help suppliers pinpoint demand signals even more closely. These new analysis and insight streams could be created and maintained by information brokers who could sort by age, location, interest, and other categories. With endless permutations, brokers' business models would align by industries, geographies, and user roles.

Delivery networks enable the monetization of data. To be truly valuable, all this information has to be delivered into the hands of those who can use it, when they can use it. Content creators — the information providers and brokers — will seek placement and distribution in as many ways as possible.

This means, first, ample opportunities for the arms dealers — the suppliers of the technologies that make all this gathering and exchange of data possible. It also suggests a role for new marketplaces that facilitate the spot trading of insight, and deal room services that allow for private information brokering.

The most intriguing opportunities, though, may be in the creation of delivery networks where information is aggregated, exchanged, and reconstituted into newer and cleaner insight streams. Similar to the cable TV model for content delivery, these delivery networks will be the essential funnel through which information-based offerings will find their markets and be monetized.

Few organizations will have the capital to create end-to-end content delivery networks that can go from cloud to devices. Today, Amazon, Apple, Bloomberg, Google, and Microsoft show such potential, as they own the distribution chain from cloud to device and some starter content. Telecom giants such as AT&T, Verizon, Comcast, and BT also have an opportunity to provide infrastructure; however, we haven't seen significant movement beyond voice and data services. Big data could be their opportunity.

Meanwhile, content creators — the information providers and brokers — will likely seek placement and distribution in as many delivery networks as possible. Content relevancy will emerge as a strategic competency for delivering offers in ad networks based on context: role, relationship, product ownership, location, time, sentiment, and even intent. For example, large wireless carriers can map traffic flows down to the cell tower. Using this data, carriers could work with display advertisers to optimize advertising rates for the most popular routes on football game days based on digital foot traffic.

There are many possible paths to monetize the big data revolution ahead. What's crucial is to have an idea of which one you want to follow. Only by understanding which business model (or models) suits your organization best can you make smart decisions on how to build, partner, or acquire your way into the next wave.

[Figure: Big Data Business Models]
