Sunday, December 25, 2011

SQL Server Transaction Isolation Models

Normally, it's best to allow SQL Server to enforce isolation between transactions in its default manner; after all, isolation is one of the basic tenets of the ACID model. However, sometimes business requirements force database administrators to stray from the default behavior and adopt a less rigid approach to transaction isolation. To assist in such cases, SQL Server offers five different transaction isolation models. Before taking a detailed look at SQL Server's isolation models, we must first explore several of the database concurrency issues that they combat:
  • Dirty Reads occur when one transaction reads data written by another, uncommitted, transaction. The danger with dirty reads is that the other transaction might never commit, leaving the original transaction with "dirty" data.
  • Non-repeatable Reads occur when one transaction attempts to access the same data twice and a second transaction modifies the data between the first transaction's read attempts. This may cause the first transaction to read two different values for the same data, causing the original read to be non-repeatable.
  • Phantom Reads occur when one transaction accesses a range of data more than once and a second transaction inserts or deletes rows that fall within that range between the first transaction's read attempts. This can cause "phantom" rows to appear or disappear from the first transaction's perspective.
SQL Server's isolation models each attempt to conquer a subset of these problems, providing database administrators with a way to balance transaction isolation and business requirements. The five SQL Server isolation models are:
  • The Read Committed Isolation Model is SQL Server’s default behavior. In this model, the database does not allow transactions to read data written to a table by an uncommitted transaction. This model protects against dirty reads, but provides no protection against phantom reads or non-repeatable reads.
  • The Read Uncommitted Isolation Model offers essentially no isolation between transactions. Any transaction can read data written by an uncommitted transaction. This leaves the transactions vulnerable to dirty reads, phantom reads and non-repeatable reads.
  • The Repeatable Read Isolation Model goes a step further than the Read Committed model by preventing transactions from writing data that was read by another transaction until the reading transaction completes. This isolation model protects against both dirty reads and non-repeatable reads.
  • The Serializable Isolation Model uses range locks to prevent transactions from inserting or deleting rows in a range being read by another transaction. The Serializable model protects against all three concurrency problems.
  • The Snapshot Isolation Model also protects against all three concurrency problems, but does so in a different manner. It provides each transaction with a "snapshot" of the data it requests. The transaction may then access that snapshot for all future references, eliminating the need to return to the source table for potentially dirty data.
If you need to change the isolation model in use by SQL Server, simply issue the command:
SET TRANSACTION ISOLATION LEVEL <level>
where <level> is replaced with any of the following keywords:
  • READ COMMITTED
  • READ UNCOMMITTED
  • REPEATABLE READ
  • SERIALIZABLE
  • SNAPSHOT
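For example, a reporting query that can tolerate dirty reads might switch its session to the Read Uncommitted model and back (a minimal sketch; the dbo.Orders table name is illustrative):

-- Allow this session to read uncommitted data (dirty reads possible)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*) FROM dbo.Orders;  -- may include rows from uncommitted transactions

-- Restore the default isolation model for the session
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

Note that the Snapshot model additionally requires the database option ALLOW_SNAPSHOT_ISOLATION to be set to ON before transactions can use it.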
Those are the basics of SQL Server transaction isolation.

Thursday, December 22, 2011

Regular Expressions


The following commented pattern (written in free-spacing mode) validates ISBN-10 and ISBN-13 numbers:
^              # Assert position at the beginning of the string.
(?:            # Group but don't capture...
ISBN           # Match the text "ISBN".
(?:-1[03])?    # Optionally match the text "-10" or "-13".
:?             # Optionally match a literal ":".
\              # Match a space character (escaped).
)?             # Repeat the group between zero and one time.
(?=            # Assert that the following can be matched here...
[-0-9\ ]{17}$  # Match 17 hyphens, digits, and spaces, then the end
|              # of the string. Or...
[-0-9X\ ]{13}$ # Match 13 hyphens, digits, Xs, and spaces, then the
|              # end of the string. Or...
[0-9X]{10}$    # Match 10 digits and Xs, then the end of the string.
)              # End the positive lookahead.
(?:            # Group but don't capture...
97[89]         # Match the text "978" or "979".
[-\ ]?         # Optionally match a hyphen or space.
)?             # Repeat the group between zero and one time.
[0-9]{1,5}     # Match a digit between one and five times.
[-\ ]?         # Optionally match a hyphen or space.
(?:            # Group but don't capture...
[0-9]+         # Match a digit between one and unlimited times.
[-\ ]?         # Optionally match a hyphen or space.
){2}           # Repeat the group exactly two times.
[0-9X]         # Match a digit or "X".
$              # Assert position at the end of the string.
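To use the commented pattern in code, either keep the comments and enable free-spacing mode, or collapse it to one line. Here is a minimal C# sketch of the one-line form (the sample ISBNs are just illustrations):

using System;
using System.Text.RegularExpressions;

class IsbnDemo
{
    // The pattern above, collapsed to one line. The escaped space "\ "
    // matches a literal space character.
    static readonly Regex Isbn = new Regex(
        @"^(?:ISBN(?:-1[03])?:?\ )?" +
        @"(?=[-0-9\ ]{17}$|[-0-9X\ ]{13}$|[0-9X]{10}$)" +
        @"(?:97[89][-\ ]?)?[0-9]{1,5}[-\ ]?(?:[0-9]+[-\ ]?){2}[0-9X]$");

    static void Main()
    {
        Console.WriteLine(Isbn.IsMatch("ISBN-13: 978-0-596-52068-7")); // True
        Console.WriteLine(Isbn.IsMatch("0-596-52068-9"));              // True
        Console.WriteLine(Isbn.IsMatch("ISBN: 123"));                  // False
    }
}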

Monday, December 19, 2011

Agile Development

1. What are the key characteristics of agile testing methodology?
When adopting agile methods for testing, we ensure that tests are run over every small unit of code as it is developed. This type of testing has a lot of advantages for both the management and technical aspects of the project. With greater control over the progress being made in development and demonstrable code at hand at every point, agile methods of testing can prove very useful. Complex requirements can be matched up at every stage of development by testing over units of code, which avoids requirement conformance mismatches surfacing late.
2. What are the changes that a development team has to accept if agile testing is on the cards?
The adaptive nature of the development team plays a major role in successfully implementing an agile method of testing for a project.
  • Every small change that is being made has to be done in a completely testable form. This means the development of even a very large concept has to be broken up accordingly to facilitate periodic testing.
  • The short iterations of test cycles also would mean that the development team has to come up with frequent builds and versions.
  • When a project is already being tested with other long-term methods of testing, a sudden change can be very difficult to bring in.
  • Unit testing will in turn be a major test of integration for each developer and one has to cautiously integrate the smaller pieces of the unit that have already been tested.
3. In what kind of environments do Agile testing methodologies prove very successful, and where do they not?
Agile methods of testing can be a very good way of developing a project that has periodic deliverables and a small team with fixed resources. In such scenarios, Agile methods can help achieve maximum efficiency and stability in the products under development. In such cases the requirements are not very varied and remain quite constant.
On the other hand, if we consider a very large organization developing software or any other product with a lot of modules and a lot of functional requirements (business logic), then agile methods may just prove to be a hurdle in the development of the different modules. So it is not a good idea to adopt agile testing methods in large projects with a wider scope of operations.
4. How can agile methods help in marketing and business aspects of a product?
Agile methods help in churning out workable code at every stage of development, so marketing of a product can be really effective: the frequent updates to the software/product come in handy in promotions. The inspect-and-adapt approach that agile implements helps product developers change or add functionality over the previously developed layers. This adaptive approach, in turn, helps keep the product up to date in the market.
5. How can it be advantageous to the developers?
Technically, agile methods implement an iterative and incremental development phase in the project. This means the developers have greater control over the design and code under development, which helps maintain a confident mindset among team members. The smaller units of development also bring much-needed modularity to the product, making it more reusable than traditional methods allow.




5Qs on Agile with Steve McConnell  
Readers of Software Development magazine once named Steve McConnell one of the three most influential people in the software industry. The CEO and Chief Software Engineer at Construx Software, Steve has generously agreed to kick off our "5Qs on Agile" feature by answering the following five often-asked questions about Agile development.
Q1: Why use Agile methods?
Agile methods focus on shorter iterations, in which the software is brought to a releasable level of quality fairly often, usually somewhere between weekly and monthly. Short iterations provide numerous technical and management benefits. On the technical side, the main benefit is reduced integration risk because the amount of software being integrated is kept small. Short iterations also help to keep quality under control by driving to a releasable state frequently, which prevents a project from accumulating a large backlog of defect correction work. On the management side, the frequent iterations provide frequent evidence of progress, which tends to lead to good status visibility, good customer relations, and good team morale.
Agile methods also usually treat requirements as more dynamic than traditional methods do. For some environments that's a plus and for some it's a minus. If you're working in an environment that doesn't need a lot of long range predictability in its feature set, treating requirements dynamically can save a lot of detailed requirements specification work and avoid the "requirements spoilage" that often goes along with working through a lengthy backlog of detailed requirements.
Q2: What is the biggest challenge when implementing Agile methods?
The biggest challenge we see in our consulting and training business is walking the walk. You can't just say you're doing Agile. You have to follow through with specific actions. Of course that's the same problem we saw years ago with object oriented methods, and before that with structured methods, so that problem isn't unique to Agile.
The most common specific challenges we see are simply the challenges of "turning the battleship" in a large organization to overcome the inertia of entrenched work practices and expectations and getting reoriented to do things in a different way. You have to muster the resolve to actually work in short iterations. You have to build frequently, at least every day, and you have to develop the discipline to keep the build healthy. You have to push each iteration to a releasable level of quality even if that's hard to do at first. As before, this problem isn't unique to Agile. If we're working with an organization and find that their biggest need is to do a better job of defining requirements up front (which isn't very agile), "turning the battleship" to define better requirements up front will be just as hard.
Q3: In what environments will Agile be most successful?
Full-blown Agile seems to me to be best suited for environments in which the budget is fixed on an annual basis, team sizes are fixed on an annual basis (because of the budget), and the project staff's mission is to deliver the most valuable business functionality that they can deliver in a year's time with a fixed team size. This mostly describes in-house, business systems dev organizations.
Full-blown agile (especially the flexible requirements part) is less-well suited to companies that sell software, because maintaining a lot of requirements flexibility runs counter to the goal of providing mid-term and long-term predictability. We've found that many organizations value predictability more than they value requirements flexibility. That is, they value the ability to make commitments to key customers or to the marketplace more than they value the ability to "respond to change."
For anything less than full-blown Agile, however, we find that many agile practices are well-suited to the vast majority of environments. Short development iterations are nearly always valuable, regardless of whether you define 5% of your requirements up front or 95% up front. Keeping the software close to a releasable level of quality at all times is virtually always valuable. Scrum as a management style and discipline seems to be very broadly applicable. Small, empowered teams are nearly always useful. I go into more detail on the detailed strengths and weaknesses of specific agile development practices in my executive presentation on The Legacy of Agile Development.
Q4: What is the future of Agile?
Agile has largely become a synonym for "modern software practices that work," so I think the future of Agile with a capital "A" is the same as the past of Object Oriented or Structured. We rarely talk about Object Oriented programming anymore; it's just programming. Similarly, I think Agile has worked its way into the mainstream such that within the next few years we won't talk about Agile much anymore; we'll just talk about programming, and it will be assumed that everyone means Agile whenever that's appropriate.
Q5: Can you recommend a book, blog, podcast, Web site, or other information source to our readers that you find interesting or intriguing right now?
I'm most excited about the Software Development Best Practices discussion forum that we launched a few weeks ago. That's at http://forums.construx.com/ . I also started blogging recently, and you can read my blog at http://blogs.construx.com/blogs/stevemcc/default.aspx . Feel free to contact me by e-mail at stevemcc@construx.com.

Thursday, December 15, 2011

How is WCF more secure than Web Services?

The following security features are used in WCF:

Transfer security: Responsible for providing message confidentiality, data integrity, and authentication of the communicating parties.

Authorization: Responsible for providing a framework for making authorization decisions.

Auditing: Responsible for logging security-related events to the audit log.

For more details:
http://wcfsecurityguide.codeplex.com/wikipage?title=Ch%2004%20-%20WCF%20Security%20Fundamentals&referringTitle=Home


Security
WSE 3.0 Web services that are secured using a policy file

WCF services can use a configuration file to secure a service and that mechanism is similar to a WSE 3.0 policy file. In WSE 3.0 when securing a Web service using a policy file, you use either a turnkey security assertion or a custom policy assertion. The turnkey security assertions map closely to the authentication mode of a WCF security binding element. Not only are the WCF authentication modes and WSE 3.0 turnkey security assertions named the same or similarly, they secure the messages using the same credential types. For instance, the usernameForCertificate turnkey security assertion in WSE 3.0 maps to the UsernameForCertificate authentication mode in WCF. The following code examples demonstrate how a minimal policy that uses the usernameForCertificate turnkey security assertion in WSE 3.0 maps to a UsernameForCertificate authentication mode in WCF in a custom binding.

WSE 3.0

<policies>
  <policy name="MyPolicy">
    <usernameForCertificate messageProtectionOrder="SignBeforeEncrypt"
                            requireDeriveKeys="true"/>
  </policy>
</policies>

WCF

<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="UserNameForCertificate"
              messageProtectionOrder="SignBeforeEncrypt"
              requireDerivedKeys="true"/>
  </binding>
</customBinding>

To migrate the security settings of a WSE 3.0 Web service that are specified in a policy file to WCF, a custom binding must be created in a configuration file and the turnkey security assertion must be set to its equivalent authentication mode. Additionally, the custom binding must be configured to use the August 2004 WS-Addressing specification when WSE 3.0 clients communicate with the service. When the migrated WCF service does not require communication with WSE 3.0 clients and must only maintain security parity, consider using the WCF system-defined bindings with appropriate security settings instead of creating a custom binding.

The following table lists the mapping between a WSE 3.0 policy file and the equivalent custom binding in WCF.
WSE 3.0 turnkey security assertion → equivalent WCF custom binding configuration

<usernameOverTransportSecurity />


<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="UserNameOverTransport" />
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

<mutualCertificate10Security />


<customBinding>
  <binding name="MyBinding">
    <security messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"
              authenticationMode="MutualCertificate" />
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

<usernameForCertificateSecurity />


<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="UsernameForCertificate"/>
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

<anonymousForCertificateSecurity />


<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="AnonymousForCertificate"/>
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

<kerberosSecurity />


<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="Kerberos"/>
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

<mutualCertificate11Security />


<customBinding>
  <binding name="MyBinding">
    <security authenticationMode="MutualCertificate"/>
    <textMessageEncoding messageVersion="Soap12WSAddressingAugust2004" />
  </binding>
</customBinding>

For more information about creating custom bindings in WCF, see Custom Bindings.
WSE 3.0 Web services that are secured using application code

Whether WSE 3.0 or WCF is used, the security requirements can be specified in application code instead of in configuration. In WSE 3.0, this is accomplished by creating a class that derives from the Policy class and then by adding the requirements by calling the Add method. For more details about specifying the security requirements in code, see How to: Secure a Web Service Without Using a Policy File. In WCF, to specify security requirements in code, create an instance of the BindingElementCollection class and add an instance of a SecurityBindingElement to the BindingElementCollection. The security assertion requirements are set using the static authentication mode helper methods of the SecurityBindingElement class. For more details about specifying security requirements in code using WCF, see How to: Create a Custom Binding Using the SecurityBindingElement and How to: Create a SecurityBindingElement for a Specified Authentication Mode.
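As a rough C# sketch of the WCF side, the static helper methods compose the binding elements in code (the transport and encoder choices here are assumptions, not part of the original article):

using System.ServiceModel.Channels;
using System.Text;

// Build the UserNameForCertificate authentication mode in code,
// the equivalent of the usernameForCertificate turnkey assertion.
SecurityBindingElement security =
    SecurityBindingElement.CreateUserNameForCertificateBindingElement();

// Use the August 2004 WS-Addressing version so WSE 3.0 clients
// can still communicate with the service.
TextMessageEncodingBindingElement encoding =
    new TextMessageEncodingBindingElement(
        MessageVersion.Soap12WSAddressingAugust2004, Encoding.UTF8);

CustomBinding binding = new CustomBinding(
    security, encoding, new HttpTransportBindingElement());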
WSE 3.0 Custom Policy Assertion

In WSE 3.0 there are two types of custom policy assertions: those that secure a SOAP message and those that do not. Policy assertions that secure SOAP messages derive from the WSE 3.0 SecurityPolicyAssertion class; the conceptual equivalent in WCF is the SecurityBindingElement class.

An important point to note is that the WSE 3.0 turnkey security assertions are a subset of the WCF authentication modes, so if you have created a custom policy assertion in WSE 3.0, there may be an equivalent WCF authentication mode. For example, WSE 3.0 does not provide a CertificateOverTransport security assertion, but WCF defines a CertificateOverTransport authentication mode that is like the UsernameOverTransport turnkey security assertion except that it uses an X.509 certificate for client authentication. If you have defined your own custom policy assertion for this scenario, WCF makes the migration straightforward: because WCF defines an authentication mode for it, you can take advantage of the static authentication mode helper methods to configure a WCF SecurityBindingElement.

When there is no WCF authentication mode that is equivalent to a custom policy assertion that secures SOAP messages, derive a class from one of the WCF classes TransportSecurityBindingElement, SymmetricSecurityBindingElement, or AsymmetricSecurityBindingElement and specify the equivalent binding element. For more details, see How to: Create a Custom Binding Using the SecurityBindingElement.

To convert a custom policy assertion that does not secure a SOAP message, see Filtering and the sample Custom Message Interceptor.
WSE 3.0 Custom Security Token

The WCF programming model for creating a custom token is different than WSE 3.0. For details about creating a custom token in WSE, see Creating Custom Security Tokens. For details about creating a custom token in WCF, see How to: Create a Custom Token.


http://www.codeproject.com/KB/webservices/SOAPHeaderAuthentication.aspx

Simple Sample with Entity Framework



Introduction

This sample shows how you can use the Entity Framework in an ASP.NET application with a layered architecture. It is useful for applications that are fairly small and fast. The code uses select, update, create, and delete functions, and you can use this example with or without stored procedures. This sample is aimed at newcomers to EF.
The first context is complex and creates some objects in memory; on the other hand, I had to send objects out to be processed. Entities stored through the Entity Framework needed a key, otherwise errors would be raised.

Database

First, look at the structure of the database. This example uses two tables, Customers and Category. Customers has a relation with Category.


Project

The project is structured in three layers: a business layer that contains the project entities and components, a data layer, and a presentation layer.


Business Layer

The Solution.Bussines.Components layer manages the business logic and joins the data layer with the entities layer.
This layer is composed of the class CustomersComponent.cs, which calls the data layer. Here you can add more logic code for your business.


Data Layer

The data layer contains the logic for accessing the database.
To insert a new record into the database, the insert method takes the customer created previously, adds it to the context, and saves the changes to the database.
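A minimal sketch of such an insert method (the SolutionEntities context and Customer entity names are assumptions based on the article's model):

public void AddCustomer(Customer customer)
{
    // SolutionEntities is the hypothetical generated ObjectContext.
    using (var context = new SolutionEntities())
    {
        // Add the previously created customer to the context
        // and push the pending insert to the database.
        context.Customers.AddObject(customer);
        context.SaveChanges();
    }
}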


Presentation Layer

In the presentation layer, built in ASP.NET, we have the form for updating or adding records. The main method sends the requested action to the components layer to execute against the database: it first fills the customer object with the form information and then calls Update or Add.





Tuesday, December 6, 2011

jQuery Selectors


jQuery Element Selectors

jQuery uses CSS selectors to select HTML elements.
$("p") selects all <p> elements.
$("p.intro") selects all <p> elements with class="intro".
$("p#demo") selects all <p> elements with id="demo".

jQuery Attribute Selectors

jQuery uses attribute selectors (a syntax originally borrowed from XPath) to select elements with given attributes.
$("[href]") selects all elements with an href attribute.
$("[href='#']") selects all elements with an href value equal to "#".
$("[href!='#']") selects all elements with an href attribute NOT equal to "#".
$("[href$='.jpg']") selects all elements with an href attribute that ends with ".jpg".


jQuery CSS Selectors

jQuery CSS selectors can be used to change CSS properties for HTML elements.
The following example changes the background-color of all p elements to yellow:


 $("p").css("background-color","yellow");

Some More Examples

Syntax Description
$(this) Current HTML element
$("p") All <p> elements
$("p.intro") All <p> elements with class="intro"
$("p#intro") All <p> elements with id="intro"
$("p#intro:first") The first <p> element with id="intro"
$(".intro") All elements with class="intro"
$("#intro") The first element with id="intro"
$("ul li:first") The first <li> element of each <ul>
$("[href$='.jpg']") All elements with an href attribute that ends with ".jpg"
$("div#intro .head") All elements with class="head" inside a <div> element with id="intro"
 


LINQ to SQL (Part 1)


        Using LINQ to SQL (Part 1)

Over the last few months I wrote a series of blog posts that covered some of the new language features that are coming with the Visual Studio and .NET Framework "Orcas" release.  Here are pointers to the posts in my series:
The above language features help make querying data a first class programming concept.  We call this overall querying programming model "LINQ" - which stands for .NET Language Integrated Query.
Developers can use LINQ with any data source.  They can express efficient query behavior in their programming language of choice, optionally transform/shape data query results into whatever format they want, and then easily manipulate the results.  LINQ-enabled languages can provide full type-safety and compile-time checking of query expressions, and development tools can provide full IntelliSense, debugging, and rich refactoring support when writing LINQ code.
LINQ supports a very rich extensibility model that facilitates the creation of very efficient domain-specific operators for data sources.  The "Orcas" version of the .NET Framework ships with built-in libraries that enable LINQ support against Objects, XML, and Databases.

What Is LINQ to SQL?

LINQ to SQL is an O/RM (object relational mapping) implementation that ships in the .NET Framework "Orcas" release, and which allows you to model a relational database using .NET classes.  You can then query the database using LINQ, as well as update/insert/delete data from it.
LINQ to SQL fully supports transactions, views, and stored procedures.  It also provides an easy way to integrate data validation and business logic rules into your data model.

Modeling Databases Using LINQ to SQL:

Visual Studio "Orcas" ships with a LINQ to SQL designer that provides an easy way to model and visualize a database as a LINQ to SQL object model.  My next blog post will cover in more depth how to use this designer (you can also watch this video I made in January to see me build a LINQ to SQL model from scratch using it). 
Using the LINQ to SQL designer I can easily create a representation of the sample "Northwind" database like below:

My LINQ to SQL design-surface above defines four entity classes: Product, Category, Order and OrderDetail.  The properties of each class map to the columns of a corresponding table in the database.  Each instance of a class entity represents a row within the database table.
The arrows between the four entity classes above represent associations/relationships between the different entities.  These are typically modeled using primary-key/foreign-key relationships in the database.  The direction of the arrows on the design-surface indicate whether the association is a one-to-one or one-to-many relationship.  Strongly-typed properties will be added to the entity classes based on this.  For example, the Category class above has a one-to-many relationship with the Product class.  This means it will have a "Products" property which is a collection of Product objects within that category.  The Product class then has a "Category" property that points to a Category class instance that represents the Category to which the Product belongs.
The right-hand method pane within the LINQ to SQL design surface above contains a list of stored procedures that interact with our database model.  In the sample above I added a single "GetProductsByCategory" SPROC.  It takes a categoryID as an input argument, and returns a sequence of Product entities as a result.  We'll look at how to call this SPROC in a code sample below.
Understanding the DataContext Class
When you press the "save" button within the LINQ to SQL designer surface, Visual Studio will persist out .NET classes that represent the entities and database relationships that we modeled.  For each LINQ to SQL designer file added to our solution, a custom DataContext class will also be generated.  This DataContext class is the main conduit by which we'll query entities from the database as well as apply changes.  The DataContext class created will have properties that represent each Table we modeled within the database, as well as methods for each Stored Procedure we added.
For example, below is the NorthwindDataContext class that is persisted based on the model we designed above:

LINQ to SQL Code Examples

Once we've modeled our database using the LINQ to SQL designer, we can then easily write code to work against it.  Below are a few code examples that show off common data tasks:

1) Query Products From the Database

The code below uses LINQ query syntax to retrieve an IEnumerable sequence of Product objects.  Note how the code is querying across the Product/Category relationship to only retrieve those products in the "Beverages" category:
C#:
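(The original post showed the code as a screenshot; below is a rough C# equivalent using the released LINQ to SQL API.)

using (NorthwindDataContext db = new NorthwindDataContext())
{
    // Query across the Product/Category relationship.
    var products = from p in db.Products
                   where p.Category.CategoryName == "Beverages"
                   select p;

    foreach (var product in products)
        Console.WriteLine(product.ProductName);
}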

VB:

2) Update a Product in the Database

The code below demonstrates how to retrieve a single product from the database, update its price, and then save the changes back to the database:
C#:
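(Again the screenshot is missing; here is a sketch of the update, assuming db is a NorthwindDataContext and "Chai" is a sample Northwind product name.)

// Retrieve a single product, change its price, and persist the change.
Product product = db.Products.Single(p => p.ProductName == "Chai");
product.UnitPrice = 99.00M;
db.SubmitChanges();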

VB:

Note: VB in "Orcas" Beta1 doesn't support Lambdas yet.  It will, though, in Beta2 - at which point the above query can be rewritten to be more concise.

3) Insert a New Category and Two New Products into the Database

The code below demonstrates how to create a new category, and then create two new products and associate them with the category.  All three are then saved into the database.
Note below how I don't need to manually manage the primary key/foreign key relationships. Instead, just by adding the Product objects into the category's "Products" collection, and then by adding the Category object into the DataContext's "Categories" collection, LINQ to SQL will know to automatically persist the appropriate PK/FK relationships for me. 
C#
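(A sketch of the insert; in the released LINQ to SQL API the Beta's Add method became InsertOnSubmit.)

// Create a new category and associate two new products with it.
Category category = new Category { CategoryName = "Gadgets" };
category.Products.Add(new Product { ProductName = "Widget" });
category.Products.Add(new Product { ProductName = "Sprocket" });

// Adding the category also persists the attached products and
// their PK/FK relationships when the changes are submitted.
db.Categories.InsertOnSubmit(category);
db.SubmitChanges();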

VB:

4) Delete Products from the Database

The code below demonstrates how to delete all Toy products from the database:
C#:
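(A sketch of the delete, again assuming db is a NorthwindDataContext.)

// Find every product whose name marks it as a toy...
var toys = from p in db.Products
           where p.ProductName.Contains("Toy")
           select p;

// ...and remove them all when the changes are submitted.
db.Products.DeleteAllOnSubmit(toys);
db.SubmitChanges();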

VB:

5) Call a Stored Procedure

The code below demonstrates how to retrieve Product entities not using LINQ query syntax, but rather by calling the "GetProductsByCategory" stored procedure we added to our data model above.  Note that once I retrieve the Product results, I can update/delete them and then call db.SubmitChanges() to persist the modifications back to the database.
C#:
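(A sketch of the stored procedure call; the category ID value is just an example.)

// Call the SPROC we added to the model instead of using query syntax.
var products = db.GetProductsByCategory(1);

foreach (Product p in products)
    p.UnitPrice += 1.00M;   // modify the returned entities...

db.SubmitChanges();          // ...and persist the modifications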

VB:

6) Retrieve Products with Server Side Paging

The code below demonstrates how to implement efficient server-side database paging as part of a LINQ query.  By using the Skip() and Take() operators below, we'll only return 10 rows from the database - starting with row 200.
C#:
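(A sketch of the paging query.)

// Skip the first 200 rows and return only the next 10 from the server.
var pageOfProducts = (from p in db.Products
                      orderby p.ProductID
                      select p).Skip(200).Take(10);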

VB:

Summary

LINQ to SQL provides a nice, clean way to model the data layer of your application.  Once you've defined your data model you can easily and efficiently perform queries, inserts, updates and deletes against it. 
Hopefully the above introduction and code samples have helped whet your appetite to learn more.  Over the next few weeks I'll be continuing this series to explore LINQ to SQL in more detail.

Difference between List Of and Collection Of

List<T> and Collection<T> expose much of the same functionality (both implement IList<T>), but they do not share an inheritance chain. List<T> adds convenience members such as ForEach, TrimExcess, and ConvertAll. Collection<T> is used as the base class for custom generic collections in the 2.0 Framework because it exposes protected virtual methods (InsertItem, RemoveItem, SetItem, ClearItems) that you can override to react to changes. You would almost always use List<T>; the main time you would use Collection<T> is to make your own collection class and use Collection<T> as the base.
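For example, a minimal sketch of a custom collection built on Collection<T> (the NonNullCollection name is made up):

using System;
using System.Collections.ObjectModel;

// Rejects null items by overriding Collection<T>'s protected
// virtual InsertItem hook; List<T> offers no such extension point.
public class NonNullCollection<T> : Collection<T> where T : class
{
    protected override void InsertItem(int index, T item)
    {
        if (item == null)
            throw new ArgumentNullException("item");
        base.InsertItem(index, item);
    }
}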

Friday, December 2, 2011

SQL server Paging


SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY uniqueid) AS RowNum, *
    FROM TableName
    WHERE status = 2
) AS a
WHERE RowNum BETWEEN 1 AND 10

How to rename a SQL Server database


Solution
There are a few ways of doing this.
Option 1
Use the following T-SQL command to make the database name change.
EXEC sp_renamedb 'oldName', 'newName'
This command works for both SQL Server 2000 and SQL Server 2005, but this feature will not be supported in future versions of SQL Server. The new command that should be used for SQL Server 2005 and beyond is:
ALTER DATABASE oldName MODIFY NAME = newName
Option 2
If you are using SQL Server Management Studio, right click on the database name and select the new option "rename". This did not exist in SQL Server 2000, but if you use Management Studio to manage your SQL Server 2000 server you can take advantage of this option for your SQL Server 2000 instances.
Option 3
Use the detach and attach feature of SQL Server to detach the database first and when you reattach the database you give the database a different name. This can be done by using the GUI or you can do this by using the following commands:
EXEC sp_detach_db 'oldName', 'true'
EXEC sp_attach_db @dbname = N'newName', @filename1 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\pubs.mdf', @filename2 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\pubs_log.ldf'
Here we are detaching database "Test"


Here we are reattaching database "Test"

But before we finish we change the name of the database to "Test2".


One thing to note is by changing the name of the database using one of these techniques you are only renaming the database.  The physical files still have the same names, so if you want to also change the name of the files the simplest approach is to use Option 3.  Before you reattach the files you need to first change the name of the physical files and then when you do the reattach you can specify the renamed files.
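Putting Option 3 together with a file rename might look like this (the paths are examples only):

EXEC sp_detach_db 'Test', 'true'
-- ...rename Test.mdf and Test_log.ldf to Test2.mdf and Test2_log.ldf in the file system...
EXEC sp_attach_db @dbname = N'Test2',
   @filename1 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\Test2.mdf',
   @filename2 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\Test2_log.ldf'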
Next Steps
  • In addition to changing the names of the databases you also need to check to see if there are any references in your application code to the database name.  This can be either within SQL Server or outside of SQL Server.  By making this name change, nothing else is changed so you may need to do a lot of additional work to change a database name.
  • See if there are databases that could use a name change and plan the change process to take advantage of a more meaningful database name.

Efficiently Paging Through Large Result Sets in SQL Server 2000


Introduction
When displaying large amounts of data to a user, it's important that the information be presented in digestible chunks rather than slamming it all down their throats at once. For example, searching for "Microsoft" on Google returns approximately 2,190,000,000 results, but thankfully Google only shows me ten of those at a time. When providing paging access to particularly large result sets - tables with tens of thousands, hundreds of thousands, or millions of records - care must be taken in querying the data such that only the particular page of data requested is returned.
Both the DataGrid in ASP.NET 1.x and the GridView in ASP.NET 2.0 provide two paging flavors:
  • Default paging - easy to implement but naively retrieves all records from the database and then trims the results to show only those for the requested page
  • Custom paging - requires the developer to correctly retrieve only the precise subset of records to show for the current page; requires more effort to implement
While default paging is quick and easy to implement, for sufficiently-sized result sets custom paging is a must. In March 2006 I authored a couple of articles here on 4Guys on how to efficiently page through large result sets using custom paging in the GridView with the ObjectDataSource and SQL Server 2005's new ROW_NUMBER() keyword. This new keyword simplifies efficiently retrieving a particular subset of data ordered by row number. After authoring these articles, I received many requests from readers for a look at how to implement this type of efficient custom paging using SQL Server 2000 (which lacks the ROW_NUMBER() keyword).
A previous article on 4Guys, Paging through Records using a Stored Procedure by Daniel Anderson, provides one approach that far outperforms the default paging implementation. However, it has a few areas that can be updated to improve the performance. This article looks at updating Daniel's stored procedure to provide an even-more efficient approach. The stored procedure presented at the end of this article can be used for classic ASP applications, custom paging with the DataGrid in ASP.NET 1.x, or used by the ObjectDataSource to provide custom paging for the GridView in ASP.NET 2.0 applications. Read on to learn more!


Improving the Stored Procedure's Performance
Daniel's approach is costly in part because it dumps all of the columns from the table to be paged through into the temp table. This requires a complete table scan of the data to be paged through. Instead, we can create the table variable to hold only the primary key column(s) of the table to be paged through. Since these values likely exist in a clustered index, populating the table variable can likely be done entirely from memory, without needing to hit the disk.
The remaining columns in the table whose data is being paged through can be accessed and returned in the stored procedure's final query, which grabs the particular subset of records from the table variable. After modernizing and optimizing Daniel's original stored procedure we end up with:

CREATE PROCEDURE PagedResults_New
(
   @startRowIndex      int,
   @maximumRows      int
)
AS


--Create a table variable
DECLARE @TempItems TABLE
(
   ID int IDENTITY,
   EmployeeID int
)

-- Insert the primary keys from Employees into the table variable
INSERT INTO @TempItems (EmployeeID)
SELECT EmployeeID
FROM Employees

-- Now, return the set of paged records
SELECT e.*, d.[Name] as DepartmentName
FROM @TempItems t
   INNER JOIN Employees e ON
      e.EmployeeID = t.EmployeeID
   INNER JOIN Departments d ON
      d.DepartmentID = e.DepartmentID
WHERE ID BETWEEN @startRowIndex AND (@startRowIndex + @maximumRows) - 1
GO
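For example, to fetch the second page of ten employees:

EXEC PagedResults_New @startRowIndex = 11, @maximumRows = 10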



Comparing the Performance of the Old and New Approach
To ascertain the improvement in performance gained by restricting the table variable to only the primary key column(s) of the table being paged through, I used SQL Profiler and compared the different queries on the Employees table with different record counts. Keep in mind that these results are highly unscientific. I ran each of them numerous times to "warm them up" and then recorded nine test runs and computed the average. But still, I was doing this on my personal machine while running a variety of other programs. Additionally, I did not simulate any load that multiple, concurrent visitors to a website would incur. The results follow and are also available at the end of this article as an Excel spreadsheet. In the New stored procedure results, the percentage improvement over the old approach is included for reference:

Approach   Records in Employees   Avg. Reads                    Avg. Duration (ms)
Old        50,000                 55,360                        425
Old        100,000                110,492                       850
Old        200,000                220,726                       1,567
New        50,000                 51,637 (6.7% improvement)     307 (27.7% improvement)
New        100,000                103,028 (6.7% improvement)    536 (36.9% improvement)
New        200,000                205,876 (6.7% improvement)    1,033 (34.1% improvement)

During my testing I wondered if specifying the table variable's ID column as a primary key would boost performance. Marking the ID column as a primary key would create a clustered index, meaning that the results would be sorted by the ID column and the index could be scanned when finding the correct subset of records in the final query. However, the index would incur additional overhead when dumping the contents of the table to be paged through into the table variable. The spreadsheet at the end of this article shows the numbers, but in my tests marking the ID column as a primary key exhibited worse performance than not marking it, although it was still better than dumping the entire table contents into the temp table (the "old" approach).
While this approach is, perhaps, acceptable for paging through large result sets in SQL Server 2000, as illustrated in Custom Paging in ASP.NET 2.0 with SQL Server 2005, SQL Server 2005's ROW_NUMBER() technique affords much better performance, taking a mere 3 milliseconds on average to return 10 records from a total table of 50,000. Compare that to the approach examined in this article, which consumed 307 milliseconds - two orders of magnitude worse!
Before you swear off the approach discussed in this article, keep in mind that bringing back all the records takes considerably longer than dumping them into a table variable. From the Custom Paging in ASP.NET 2.0 with SQL Server 2005 article, paging through 50,000 records using the GridView's default paging, which retrieves all records from the Employees table, took on average 1,411 milliseconds. Also, since the ROW_NUMBER() option is a feature new to SQL Server 2005, if you're still using SQL Server 2000 you're stuck with the table variable technique.
Conclusion
In this article we looked at how to retrieve a specific page of data from a table using a stored procedure. This technique is useful for providing efficient, custom paging through large result sets with the ASP.NET 1.x DataGrid and ASP.NET 2.0 GridView. This technique can also be used in classic ASP applications or any other data-driven application that needs to work with paged data. If you are using SQL Server 2005, you should instead use the ROW_NUMBER() keyword as discussed in Custom Paging in ASP.NET 2.0 with SQL Server 2005.

http://www.4guysfromrolla.com/webtech/041206-1.shtml


Thursday, December 1, 2011

What is the advantage of using WCF over other distributed programming models like Web Services (ASMX), .NET Remoting, and the Enterprise Services stack?

To understand the advantage of using WCF over other distributed programming models like Web Services (ASMX), .NET Remoting, and the Enterprise Services stack, let's consider the following scenario. We have developed an application using web services. As web services use the HTTP protocol and XML SOAP-formatted messages, they are good for developing interoperable applications in a heterogeneous environment. Now we have a new client. Our new client is using .NET and wants binary-formatted messages over the TCP protocol, because interoperability is not a concern and binary-formatted messages over TCP are much faster than XML SOAP-formatted messages over HTTP. To satisfy this client's requirement we cannot use our existing web service, so we have to develop a brand new remoting application from scratch. The business functionality is the same in the web services and remoting applications. Because our different clients have different requirements, we ended up creating the same business application twice with different technologies. This approach has several disadvantages, as listed below.
1. Developers have to be familiar with two different technologies (Web Services and Remoting).
2. We end up creating duplicate business applications with different technologies, which also leads to maintenance overhead.


On the other hand, WCF unifies the Web Services, .NET Remoting, and Enterprise Services stacks under one roof. For the same requirement that we have seen until now, we just create one application and expose multiple endpoints to satisfy the requirements of multiple clients. In WCF, configuration drives protocol choices, messaging formats, process allocation, and so on. WCF services are loosely coupled, meaning that a WCF service is not bound to a particular protocol, encoding format, or hosting environment. Everything is configurable.
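For instance, a single service might expose an interoperable HTTP endpoint alongside a fast TCP endpoint purely through configuration (a sketch; the MyCompany.OrderService and IOrderService names are hypothetical):

<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <!-- Interoperable endpoint: XML/SOAP over HTTP -->
      <endpoint address="http://localhost:8000/OrderService"
                binding="basicHttpBinding"
                contract="MyCompany.IOrderService" />
      <!-- Fast endpoint for .NET clients: binary over TCP -->
      <endpoint address="net.tcp://localhost:9000/OrderService"
                binding="netTcpBinding"
                contract="MyCompany.IOrderService" />
    </service>
  </services>
</system.serviceModel>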