March 28, 2024

How Pricing Experts Make More Money in Gas Markets
Of course, gas companies sell gas.
But price and complex contract terms are the real product sold in today’s gas markets.

by Joe Franz, Product Manager, LODESTAR
Gas companies understand very well that price is important. But saying something is important is different from saying it is your business, and treating price as the business demands a fresh perspective. Perhaps the biggest realization is that gas itself is not the key commodity; information is. A price is, after all, a number that reflects the analysis of other numbers, any or all of which are equally available to everyone in the gas business. What counts more is the speed and cost at which a company can acquire, process, and use the data needed to quote an accurate price, one that earns the maximum profit on a specific customer. That is the true differentiator.

There are three things gas companies need to understand about the information commodity. First, price is what sells, not gas. Second, price is a unit of information. Third, gas companies must now demonstrate the same type of competitive advantage with information management that they once had to demonstrate with gas extraction, processing, transportation, and quality control.

Information cost differences are where the true opportunity lies in the gas marketplace. Compared to gas costs, information costs vary widely. If a gas company had not figured out long ago how to produce and transport gas at market cost levels, more efficient competitors would have driven it from the market. The same will someday be true of information. Substantial information cost differences now exist between market participants. For example, it simply takes longer for some gas companies to:

  • extract usage data from billing systems and price curves from trading desks,

  • run that information against marketing models,

  • identify likely prospects for new offers,

  • calculate a price quote tailored to each prospect’s unique situation,

  • present offers to those prospects, and

  • drive other information systems, e.g. billing, from those executed contracts.

Those delays add costs:

  1. loss of business caused by delayed or mispriced quotes,

  2. unprofitable customers caused by mispriced quotes,

  3. increased risk caused by unhedged offers, and

  4. increased operating expenses from unintegrated, manual processes.

Over time the market will wring out these information cost differences between players, just as it has for the gas commodity. All players will become information efficient or leave the market. Until then, more information-efficient companies have a significant market advantage.

Breaking Down the Information Commodity
Data, like any commodity, can be enriched. When it comes to selling a particular gas customer on a particular pricing formula, there are certain “raw ingredients” to mix:

  • Market pricing curves

  • Competitive retail prices

  • Proximity to pipelines

  • Price sensitivity

  • Interruptible loads

  • Ancillary charges (spinning reserves, etc.)

  • Loyalty ratings from market research

  • Usage levels at different time periods

  • Green ratings

  • Imbalance calculations

  • Delivery charges

  • And many others ...

Depending on the end user, these factors become essential to producing accurate and timely prices tailored to individual customers. For existing customers, the goal is a contract that commits the buyer to a longer term (a process called “blending and extending”). For prospects, the goal is a contract more competitive than those offered by other gas companies. In both cases, the incentive for switching is usually not just a lower price (although it might be) but also a better pricing formula. Four common types of formulas are:

Flat rates
price stays the same over a specified number of months or years

Floating rates
price is indexed to published market prices

Caps and collars
customer pays a floating rate, but one guaranteed to stay below a certain point (cap) or within a certain range (collar)

Shared savings/shared risk
customer pays a floating rate. If the price falls below a certain point, the difference is credited to a savings account. If the price rises above a certain point, the customer can draw from the savings account to pay the difference, potentially resulting in a negative balance. The retailer receives half of any leftover savings and contributes half of any leftover negative balance.
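To make the mechanics concrete, here is a minimal Python sketch of the four formulas. The function names, the thresholds, and the shared-savings bookkeeping are illustrative assumptions, one plausible reading of the descriptions above, not any particular retailer’s implementation.

```python
# A minimal sketch of the four pricing formulas described above.
# All names and the shared-savings bookkeeping are hypothetical
# illustrations, not any particular retailer's pricing logic.

def flat_rate(fixed_price: float, market_price: float) -> float:
    """Price stays the same regardless of the market."""
    return fixed_price

def floating_rate(market_price: float) -> float:
    """Price is indexed to the published market price."""
    return market_price

def cap_or_collar(market_price: float, cap: float, floor: float = 0.0) -> float:
    """Floating rate guaranteed to stay below the cap and, with a
    nonzero floor, within a range (the collar)."""
    return min(max(market_price, floor), cap)

def shared_savings(market_price: float, low: float, high: float,
                   balance: float) -> tuple[float, float]:
    """One plausible reading: the customer's out-of-pocket rate is
    collared between low and high, with the difference flowing
    through a savings account that may go negative.
    Returns (rate charged, updated account balance)."""
    if market_price < low:
        return low, balance + (low - market_price)    # savings accrue
    if market_price > high:
        return high, balance - (market_price - high)  # draw down
    return market_price, balance

def settle(balance: float) -> float:
    """At contract end the retailer receives half of any leftover
    savings and contributes half of any negative balance."""
    return balance / 2.0  # retailer's share, positive or negative
```

A real system would, of course, track the account per contract and per billing period; the point is only that each formula reduces to a small, testable rule.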

It is not price alone that customers find compelling but the way the price is packaged (or “productized”) to meet their needs. Those needs reflect various risk tolerances, price sensitivities, load requirements, load schedules, and other characteristics. Running the database of customers or prospects through the model yields the most enriched information products of all: the names of customers that score high in their likelihood to buy (or buy longer) when offered a specific pricing formula.

Breaking Down the Process
The need to correlate different pricing formulas against different customer types will not come as news to most gas retailers. Most are already doing that. What separates more profitable retailers from less profitable ones is the speed and efficiency with which it is done. It is noteworthy that many retailers that clearly understand the importance of efficiency in bringing to market a generic commodity (i.e., natural gas) are still less than efficient in bringing to market a differentiated one (i.e., information).

As a commodity, information really is a lot like natural gas. Functions like extraction, processing, transport, and quality control are critical. These are also the very areas in which gas companies typically hurt themselves in handling information:

Extraction
Can pricing systems automatically extract requirements data and delivery points (i.e., when and where customers will use gas)?

Can systems easily accept manual inputs from salespeople about future operations (e.g., summer shutdowns)?

Can systems easily accept outputs of predictive statistical models?

Can systems accept pricing curves directly from trading desks?

Processing
Do business analysts rely on spreadsheets or, worse, pad and paper to score customers and configure best offers?

Do business analysts rely on programmers to change pricing algorithms and data validation rules?

How are contracts written: manually, using a word processor, or do systems automatically compute or conditionally apply terms and then insert the appropriate language into the contracts?

Transport
Does information move smoothly between functions (e.g., from trading and billing to pricing and contract writing, all the way to the customer and back), or does data have to be reentered manually from system to system?

Do workflow tools exist to help manage information flows and process steps, ensuring repeatability and best practices?

Do functions tend to be web-based, allowing efficient access regardless of geographic location?

Does the pricing system tie forward books directly to the wholesale and retail contracts, and to the delivery points behind them?

Quality Control
Will systems automatically smooth spikes and dips and extrapolate missing monthly meter reads?

Can business analysts easily enter data validation rules into systems rather than rely on computer programmers to do it?

Will systems check for invalid data, and correct it automatically or else flag it for manual follow-up?

Do tools make it easy to visualize and correct data validation issues on screen?

Is the pricing system completely auditable, trackable and controllable? Does it store all aspects of the deal and provide flexible reporting capabilities to allow proper management of gas operations?

Looking at these functions, there are generally three kinds of information costs companies incur: 1) the “frictional costs” incurred moving data across organizational, technological, and geographic boundaries; 2) the cost of “dirty” data; and 3) the costs of using inefficient processing methods.

Frictional Costs
The most obvious source of friction is when data from one system has to be manually entered into another, say, when usage data from billing or price curves from trading must be reentered into the systems that generate offers based on usage patterns and price curves. The friction is obvious when there are delays and people make mistakes entering data. The mistakes mean that customers are offered the wrong price-products and that the company is potentially exposed to unhedged risks. Delays mean that customers are lost or that deals are mispriced, either because a customer’s pattern of usage has changed or (more likely) because the price of natural gas has changed. An extreme (but all too common) example is a market price spike, when a gas company might want to suspend making offers altogether and retreat from the market rapidly.

Even when systems do talk to each other, sometimes it is over communications links that are less efficient than they might be. Web-based systems, for example, are more efficient than client/server technology: they can be accessed anywhere there is an Internet connection, using commodity PCs and web browser software. Client/server systems tend to be proprietary, which means that access may be limited geographically or organizationally to the physical locations where the systems are actually installed.

Dirty Data Costs
It costs money to clean data, but it usually costs more not to. Dirty data is data that contains gaps or errors. In addition to data entry errors (already discussed), the other major example is missing or erroneous meter reads. One cost of using bad meter data is the same as for data entry errors, i.e., mispriced contracts, if the gas company relies on incorrect or missing usage data when making offers. If the bad data is used for billing, it can result in uncollected revenues or, alternatively, in exposure to liabilities.

That is the cost of using it. The cost of cleaning it depends on how efficiently the company can validate data as it passes through the network and make corrections as needed. Typically, the way to identify dirty data is to submit it to validation rules such as the following, which might apply, say, to a dry cleaning business:

  • No more than a 30% difference between winter and summer usage

  • No more than a 50% month-to-month load variation

  • No more than a 100% variation peak to minimum load during the contract period

If data points exceed these variations (a spike), software can automatically smooth the data or, alternatively, flag it for human inspection. The same applies to missing data (gaps). For example, if a 12-month period is missing fewer than six months of data for a customer with consistent usage, the system might be programmed to simply plug in extrapolated values; otherwise, it might kick out the customer record for inspection by a person. Whether or not these automated capabilities exist determines to a great extent how much a company pays to clean up data. The more manual the process, or the more difficult it is to program, the harder the data is to clean, the longer it takes, and ultimately the more expensive it is to do. A key factor in ease of programming is whether business analysts can enter validation rules themselves or must rely on software developers. If the former, not only is the process faster, but the chances of getting it right the first time are much higher, because the chance of misinterpretation is lower. Analysts can also tweak the rules on the spot rather than cycle through a series of iterations with developers, making the whole process faster and more accurate.
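As a sketch of how such rules might be automated, the snippet below applies the three illustrative thresholds to twelve months of usage data and either repairs the series or flags it for review. The function name, the average-fill repair, and the month indexing are assumptions made for the example.

```python
# Hypothetical validation pass over twelve months of usage data.
# Thresholds mirror the illustrative dry-cleaner rules above; the
# helper name and repair strategy are assumptions, not a real API.

def validate_and_repair(monthly_usage):
    """monthly_usage: 12 floats, with None for a missing meter read.
    Returns (repaired series, list of issues for human review)."""
    issues = []
    known = [u for u in monthly_usage if u is not None]
    missing = monthly_usage.count(None)

    # Gap rule: extrapolate only if fewer than six months are missing.
    if missing >= 6 or not known:
        return monthly_usage, ["too many gaps: route to a person"]
    average = sum(known) / len(known)
    repaired = [u if u is not None else average for u in monthly_usage]

    # No more than 100% variation, peak to minimum load.
    if min(repaired) > 0 and max(repaired) / min(repaired) - 1 > 1.0:
        issues.append("peak-to-minimum variation exceeds 100%")

    # No more than 50% month-to-month load variation.
    for i in range(1, len(repaired)):
        prev, cur = repaired[i - 1], repaired[i]
        if prev > 0 and abs(cur - prev) / prev > 0.5:
            issues.append(f"month {i}: >50% jump from month {i - 1}")

    # No more than 30% winter/summer difference (assumes a
    # January-indexed year: winter = Dec-Feb, summer = Jun-Aug).
    winter = (repaired[11] + repaired[0] + repaired[1]) / 3
    summer = sum(repaired[5:8]) / 3
    if summer > 0 and abs(winter - summer) / summer > 0.3:
        issues.append("winter/summer difference exceeds 30%")

    return repaired, issues
```

Flagged records go to a person; everything else flows straight through, which is the difference between a manual cleanup process and an automated one.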

Another cost factor is how often the same data needs to be cleaned. If four different systems (say, trading, billing, pricing, and settlement) each have to clean the data separately, that is obviously four times the work and four times the delay. A better approach is to store validated data for common access by all systems. Not only does that eliminate redundant cleaning, it eliminates the possibility that data in one system is out of synch with data in another, for example because one system has been updated and another has not.

Another way to avoid duplicated effort and out-of-synch data is to store not only the data but also the validation rules themselves in a single repository. The same rules are then applied regardless of which system actually performs the validation, because every system gets the same rules from the same source.
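A minimal sketch of the single-repository idea, with a hypothetical rule table and field names: the thresholds live in one place as data, and every consuming system evaluates the same definitions.

```python
# Hypothetical shared rule repository: each threshold is stored once,
# as data, and every consuming system evaluates the same definition.

RULE_REPOSITORY = {
    "max_month_to_month_variation": 0.50,
    "max_peak_to_min_variation": 1.00,
    "max_winter_summer_difference": 0.30,
}

def month_to_month_ok(series, repo=RULE_REPOSITORY):
    """True if no month jumps more than the shared threshold."""
    limit = repo["max_month_to_month_variation"]
    return all(
        prev == 0 or abs(cur - prev) / prev <= limit
        for prev, cur in zip(series, series[1:])
    )

# Trading, billing, pricing, and settlement all call the same check
# against the same repository, so a threshold changed here is changed
# everywhere at once, and no two systems can disagree.
```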

Processing Costs
The guidelines that apply to increasing data validation efficiency also apply across information processing generally:

  • Reduce redundant effort

  • Ensure data consistency across systems

  • Ensure rule consistency across systems

  • Make it easy for business analysts themselves to implement business rules

Scoring customers is a good example of where efficiency counts. Companies would not want the rules in their scoring models to differ from the rules used to actually select which customers receive which offers. Nor would companies want to make their analysts go through the information services department every time a rule needed revising. Business rules, of course, are important everywhere, not just in data validation or customer scoring. The more “English-like” the language in which these rules can be written, the more efficient they are to write and revise, in both timeliness and accuracy, and that lowers costs.
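As an illustration of what “English-like” can mean in practice, the sketch below lets an analyst express a rule as a readable string that is evaluated against customer attributes. The tiny grammar and the attribute names are invented for the example.

```python
# Hypothetical English-like rule language: the analyst writes a
# readable condition; the same string drives both customer scoring
# and offer selection, so the two can never use different rules.

import operator

COMPARATORS = {" over ": operator.gt, " under ": operator.lt,
               " at least ": operator.ge}

def matches(rule, customer):
    """Evaluate a rule like 'annual_usage over 50000' against a
    dictionary of customer attributes."""
    for phrase, compare in COMPARATORS.items():
        if phrase in rule:
            field, value = rule.split(phrase)
            return compare(customer[field.strip()], float(value))
    raise ValueError(f"unrecognized rule: {rule!r}")

customer = {"annual_usage": 72000, "loyalty_score": 0.8}
print(matches("annual_usage over 50000", customer))     # True
print(matches("loyalty_score at least 0.9", customer))  # False
```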

Not only would you want the rules easy to implement, you would want to implement each one only once, regardless of where the rule was invoked or on which copy of the data. In fact, you would want to store only one copy of the data, again in a single repository, to ensure that the version of an item used by one system is never different from the version used by another.

Easy-to-write rules and a common repository also have another advantage: they make it easier to address pricing requirements in multiple markets and commodities. All the business analyst needs to do is add the appropriate rules to the repository, not rewrite software on every system those rules affect.

Accelerate the Value Cycle
The payoff from selling a complex product like a cap or a collar is greater than from selling an undifferentiated product like natural gas, but only if gas retailers exploit the opportunities inherent in this complexity. For a commodity, the upside is mainly in reducing costs once revenues are locked in. For a differentiated product, the upside comes from adding value and bringing it to market faster and more often. Costs matter, as we have been discussing, but some costs matter more than others. The costs that matter most are market opportunity costs, less so the cost of redundant manual effort or of replacing obsolete technology. Compared to the opportunity costs at stake, these incremental out-of-pocket expenses may actually seem trivial. It is the impact of missing usage data or out-of-synch market pricing that really hurts, and that impact probably will not show up on accounting ledgers. It is the cost of unrealized profit and untapped market growth potential.

The greatest reward for natural gas retailers lies in reducing this opportunity cost, in other words, in accelerating the value cycle. How is that accomplished? It is accomplished by rewriting the customer’s pricing formula (adding value) whenever a change in market pricing curves or customer usage patterns permits. That means accelerating the cycle of extraction, processing, transport, and quality control. The more times the retailer completes this cycle, the more value it adds to the product and the faster and more frequently that added value is monetized. What product? Not natural gas, the purported mission of the business, but pricing formulas, which are the true product in today’s hotly competitive gas market. The cost of supplying that product, both opportunity and out-of-pocket, is a function of the retailer’s information efficiency. Increasingly, it is on that benchmark that the value of gas retailers themselves will be judged.