Saturday, January 19, 2008

Solid Quality Mentors continue to provide timely information about SQL Server at DevWeek 2008

Why does the business bother to keep investing in IT?

Information is the be-all and end-all of business functionality. Without it there would be no business beyond what can be conducted on a one-to-one or, in limited cases, one-to-many basis, with zero complexity in product provision or utilisation.

As individuals form into organisations, the information that matters changes, but it is still called information. To make life simpler, we can categorise the information that a functionally organised business unit needs to think about into three different types.

Operational Information

This information facilitates the day-to-day procedures within the business that ensure we can record time, cost and product/service delivery details. It is often referred to as the sharp end of the business. It is the point where minimum-cost decisions can be made, if people are empowered to make them and the tactical management is receptive to input.

Tactical Information

Information about the success or failure of the business procedures, and about remedial action, within a mid-term period, say three months to a year. It is also used to monitor the success or failure of the remedies. Here we're talking about rewarding correct behaviour and facilitating the improvement of incorrect behaviour in handling immediate business requirements.

Strategic Information

Information about the long-term aspirations of an organisation compared with the aspirations of the customers that the organisation serves. This is where the focus needs to be on whether we are meeting the types of needs of the types of customers that we serve. It is also where all the ‘what if’ type queries will occur in a suitably architected organisation. What if I didn’t have these types of customers? What if I changed the features of the products to create more product types, or changed the customer types by educating them in the benefits of the new product types? What if I don’t have the new products or services in time to meet the customer demands?

What we have here is the word ‘information’ being overloaded, which means that information without context is meaningless. However, when we conduct business analysis we sometimes find that the ‘information’ required by each type of recipient is equally important to them, and that they have neither knowledge of, nor care about, the information required by the others.

Which is the most important ‘information’, and which should drive the design of the business computer system? Well, none of them. They are all as important as each other to their own audience. The issue is how we can satisfy this diverse set of information recipients well enough to help the business exist, grow and survive.

The trick is to separate the data from the applications, so that we can set up a separate channel for the data from that of the applications. We can then have different islands of data to meet the diverse requirements of the recipients, each decoupled from the others using appropriate tools and techniques.

Creating Optimised Online Transaction Processing (OLTP) systems

To start the ball rolling we need to make sure that the operational systems are functioning appropriately. This, after all, is where the data is captured. If you cannot capture the data in a timely fashion then the human element in the system will become disillusioned and switch off. To ensure the delivery of an excellent OLTP system, we will be looking at designing highly concurrent SQL Server databases for loosely coupled applications. This will give us the foundation of a good development. To enhance our understanding, we can go further with a session on what lies between index internals, isolation and data consistency.
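As a flavour of what ‘highly concurrent’ can mean in practice, row versioning lets readers stop blocking writers. The database name below is hypothetical, and the statements are a sketch rather than a recommendation for any particular system:

```sql
-- Hypothetical database name; a sketch of enabling row versioning in
-- SQL Server 2005 so that readers no longer block writers.
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;

-- A connection can then opt in to snapshot isolation explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

Note that switching READ_COMMITTED_SNAPSHOT on needs exclusive access to the database, so it is a change to plan, not to slip in during the working day.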

We can further enhance the design by ensuring that we do not hamper the data by copying it around within code, which prevents it from being used in a flexible way as the business grows. You may have come across the terms coupling and cohesion previously; they are measures of the quality of a system. A highly coupled system will be difficult to maintain as the business expands and changes. Therefore, we need to know which data to store where, and how and why it should be manipulated in a particular way. To this end we have two sessions that cover the best things to do at the different layers of a system: SQLCLR Design for re-use and maintainability and Practical optimization of stored procedures in SQL Server 2005. The first will apprise you of the kinds of things that you may want to do in the common language runtime (CLR). The second will show you how to get the best performance out of data-focussed code. Further improvements are possible because the latest versions of SQL Server include structured exception handling, which separates the normal flow of the code from the error flow. This is detailed within the session Errors and Exceptions in SQL Server 2005.
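To illustrate that separation of normal flow from error flow, here is a minimal TRY...CATCH sketch; the procedure and table names are hypothetical, and the session itself goes much deeper:

```sql
-- Hypothetical procedure and table names; a minimal sketch of
-- TRY...CATCH as introduced in SQL Server 2005.
CREATE PROCEDURE dbo.ShipOrder
    @OrderId int
AS
BEGIN TRY
    -- The normal flow lives here.
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- The error flow lives here, cleanly separated.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```

Contrast this with the old style of checking @@ERROR after every statement, where the error handling was tangled through the normal flow.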

Having sorted out the data capture, we have to think about the movement of data from one island to another. This can be done with a set of pre-built functional components called SQL Server Integration Services (SSIS). These allow you to move data around the organisation while applying business rules to the data flows, changing the granularity and structure of the data as it moves from operational to tactical to strategic information. What is more, SSIS also allows you to apply the business rules derived from higher management to the structures that contain the raw data at the operational level, thus providing strategic control of the operational processes.

You will learn much about the New features of SSIS in SQL Server 2008, which facilitate and further enhance what is possible with SSIS in SQL Server 2005, which itself enhanced DTS, a much humbler version of the toolset available in the two previous versions of SQL Server. Before DTS was available you simply had BCP to move data from system to system, or you could spend hundreds of thousands of pounds on Extraction, Transformation and Loading (ETL) software.

Of course, this is all very well, but how does it help if your systems are already out there and you have to work with what you have got? This is the reality for most people. Therefore there is a session on Lessons learned while tuning database systems. This session is designed to do more than give you a sense of belonging, having been alienated from your ‘pure’ knowledge by the cold reality of existing systems. It aims to communicate what works best in situations that we may come across, or perhaps have already come across, where we implemented a solution that is once again in need of a fresh perspective to meet the business requirements.

It is not appropriate to be narrow-minded when thinking about databases, although it can be appropriate to have a corporate strategy to implement a given RDBMS throughout the organisation. The business world knows that what it survives by, customer service in terms of time, cost and quality, is an equally effective set of forces for service delivery within the organisation. This means that you are likely to come across more and more projects where you are either interfacing with or replacing established platforms. One of the most established platforms to date is Oracle, which will be served by a session on Lessons learned when migrating complex Oracle Enterprise database systems to SQL Server.

Creating Optimised Online Analytical Processing (OLAP) systems

With all these applications generating data in perfectly organised data structures for OLTP purposes, we need to progress on to the other levels of information. Online Analytical Processing is all about looking at data at different levels of detail and from different angles. Therefore there is a session on What’s new in Analysis Services 2008. This covers the situation where you want to optimise Business Intelligence using a Data Warehouse: a centralised, clean set of information sources. That is fine if you are not constrained by time or cost and want 100% quality information. However, to develop the perfect solution you need the tools to squeeze out the last bit of information that your requirement for Business Intelligence demands, and now you have them.

However, it is rather a lot to expect developers and users to take on new tools and techniques without adequate reason. It can be unreasonable to expect perfection, I have now learned, but we can always get closer to it than we have been before. It is this striving for perfection that makes us look at other ways to put some intelligence into the business functions without the need for data warehousing, which can be expensive and a false economy if not done properly. Therefore we have a session on OLAP without a Data Warehouse in SQL Server.

Returning to the issue of ensuring minimum coupling and high cohesion in our systems: if one can put the right code in the right place, it is like having a toolbox of useful tools, each being used for the right thing. Ever since Microsoft put together an OLAP solution, they have had a language to decouple the way the data is structured from the way that it is used, in the form of a language that looks dangerously like SQL. I say dangerously because it can lull the unsuspecting individual into thinking it works in the same way as SQL.

However, it doesn’t. With SQL the sets are pre-defined, and you are basically creating subsets and supersets of the predefined sets, which are referred to as tables or relations. The rows within the sets are referred to as tuples: a collection of predefined attributes within the set representing the table. With MDX the tuple is one or more members of a set, which can be single- or multi-dimensional. It can also be disjoint, so that you can compare aggregations of subsets of data which correspond along a given axis of the dimensions in question, where the sets are non-contiguous. There is a session to help you make sense of the way this language works, in the shape of Introduction to MDX. If you insist on using only SQL for all your data manipulation needs, then you will want to attend the session on date-time related problems and temporal queries.
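To give a flavour of the difference, here is a minimal MDX sketch. The cube and member names follow the Adventure Works sample database and are assumptions for illustration, not anything the session commits to:

```mdx
-- A set of tuples on each axis: a measure across the columns, two
-- calendar-year members down the rows, sliced by one product category.
SELECT
    { [Measures].[Sales Amount] } ON COLUMNS,
    { [Date].[Calendar Year].&[2003],
      [Date].[Calendar Year].&[2004] } ON ROWS
FROM [Adventure Works]
WHERE ( [Product].[Category].&[1] )
```

Note how the query navigates members of dimensions rather than filtering rows of a predefined table: the SELECT...FROM...WHERE shape is familiar, but the set-building behind it is not SQL.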

Search for the appropriate sessions, as highlighted within the article above, in the document at http://www.devweek.com/sessions/devweek-timetable-2008.pdf

Thursday, January 10, 2008

Attend a free SQL Server 2008 Virtual Event on the 24th January 2008

The nice people at the Professional Association for SQL Server (PASS) and the equally nice people at Solid Quality Learning have got together with SQL Server Magazine to ‘virtually’ give away, to anyone that wishes it, an insight into the changes that the new features within SQL Server 2008 will bring about.

If you would like to share the insight, then make your way here to register.

Tuesday, January 01, 2008

Retro Service Manager for SQL Server

Sometimes when learning a new environment it can be easier to leave some things until later.

There are also times when a user maps their processes around the interface presented by a given version of a product and, shockingly, discovers that the interface has changed too substantially to warrant moving onto the new environment.

In these kinds of cases, what can appear to be a retrograde step to those forging ahead can actually be the equivalent of a belay for those climbing behind them.

Such a step comes from thoughtful people who provide the anchors that can be used to help others progress up the technological mountain that appears to be growing around them.

One of these steps is the re-creation of the Service Manager as it was before SQL Server 2005 came along and changed the interface from the way it had been in the past. If you liked what had become the common way of setting the basic features of the services, like start, stop and pause, then some nice people have created an equivalent Service Manager for post-2000 versions of SQL Server. Find it at Codeplex