Running Visual Studio 2010


When you launch Visual Studio the Microsoft Visual Studio 2010 splash screen appears.
Like a lot of splash screens, it provides information about the version of the product and to whom it has been licensed, as shown in the figure below.




The first time you run Visual Studio 2010, you see the splash screen only for a short period before you are prompted to select the default environment settings. It may seem unusual to ask those who haven't used a product before how they imagine themselves using it. Because Microsoft has consolidated a number of languages and technologies into a single IDE, that IDE must account for the subtle (and sometimes not so subtle) differences in the way developers work.



If you take a moment to review the various options in this list, as shown in the figure below, you will find that the affected environment settings include the position and visibility of various windows, menus, and toolbars, and even keyboard shortcuts. For example, if you select the General Development Settings option as your default preference, this screen describes the changes that will be applied. A later section covers how you can change your default environment settings.




A tip for Visual Basic .NET developers coming from previous versions of Visual Studio is that they should not use the Visual Basic Development Settings option. This option has been configured for VB6 developers and will only infuriate Visual Basic .NET developers, because they will be used to different shortcut key mappings. We recommend that you use the general development settings, because these will use the standard keyboard mappings without being geared toward another development language.




Regardless of the environment settings you selected, you see the Start Page in the center of the screen. However, the contents of the Start Page and the surrounding toolbars and tool windows can vary.



Before you launch into building your first application, it's important to take a step back and look at the components that make up the Visual Studio 2010 IDE. Menus and toolbars are positioned along the top of the environment, and a selection of sub windows, or panes, appears on the left, right, and bottom of the main window area. In the center is the main editor space: whenever you open a code file, an XML document, a form, or some other file, it appears in this space for editing. With each file you open, a new tab is created so that you can toggle among opened files. On either side of the editor space is a set of tool windows: these areas provide additional contextual information and functionality. In the case of the general developer settings, the default layout includes the Solution Explorer and Class View on the right, and the Server Explorer and Toolbox on the left. The tool windows on the left are in their collapsed, or unpinned, state. If you click a tool window's title, it expands; it collapses again when it no longer has focus or you move the cursor to another area of the screen. When a tool window is expanded, you see a series of three icons at the top right of the window, similar to those shown in the left image of the figure below.




If you want the tool window to remain in its expanded, or pinned, state, you can click the middle icon, which looks like a pin. The pin rotates 90 degrees to indicate that the window is now pinned. Clicking the third icon, the X, closes the window. If later you want to reopen this or another tool window, you can select it from the View menu.



The right image in the figure above shows the context menu that appears when the first icon, the down arrow, is clicked. Each item in this list represents a different way of arranging the tool window. As you would imagine, the Float option allows the tool window to be placed anywhere on the screen, independent of the main IDE window. This is useful if you have multiple screens, because you can move the various tool windows onto the additional screen, allowing the editor space to use the maximum screen real estate. Selecting the Dock as Tabbed Document option makes the tool window into an additional tab in the editor space.

Installing Visual Studio 2010


When you launch Visual Studio 2010 setup, you see the dialog in Figure 1-1 showing you the three product installation stages. As you would imagine, the first stage is to install the product itself. The other two stages are optional. You can either install the product documentation locally, or use the online (and typically more up-to-date) version. It is recommended that you check for service releases, because this ensures you are working with the most recent version of the product and associated tools.




As you progress through the setup process you are prompted to provide feedback to Microsoft (left image, Figure 1-2) and agree to the licensing terms for the product.




The Visual Studio 2010 setup process has been optimized for two general categories of developers: those writing managed, or .NET, applications, and those writing native, or C++, applications (left image of the figure below). The Customize button allows you to select components from the full component tree, as shown in the right image of the figure below.




Once you have selected the components you want to install, you see the updated progress dialog in the left image of the figure below. Depending on which components you already have installed on your computer, you may be prompted to restart your computer midway through the installation process. When all the components have been installed, you see the setup summary dialog in the right image of the figure below. You should review this to ensure that no errors were encountered during installation.



ACID properties

ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably.

Atomicity

Atomicity requires that database modifications must follow an "all or nothing" rule. A transaction is said to be atomic if, when one part of the transaction fails, the entire transaction fails and the database state is left unchanged. It is critical that the database management system maintains the atomic nature of transactions in spite of any application, DBMS (Database Management System), operating system, or hardware failure.

An atomic transaction cannot be subdivided, and must be processed in its entirety or not at all. Atomicity means that users do not have to worry about the effect of incomplete transactions.

Transactions can fail for several kinds of reasons:


  • Hardware failure: A disk drive fails, preventing some of the transaction's database changes from taking effect.
  • System failure: The user loses their connection to the application before providing all necessary information.
  • Database failure: E.g., the database runs out of room to hold additional data.
  • Application failure: The application attempts to post data that violates a rule that the database itself enforces, such as attempting to insert a duplicate value in a column.

Example:

An example of an atomic transaction is an account transfer transaction. The money is removed from account A and then placed into account B. If the system fails after removing the money from account A, the transaction processing system puts the money back into account A, thus returning the system to its original state. This is known as a rollback.
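A minimal sketch of such a transfer, using an ADO.NET transaction, is given below. The connection string, the Accounts table, and its columns are assumptions for illustration only, not part of any prescribed schema.

    using System;
    using System.Data.SqlClient;

    class TransferExample
    {
        static void Transfer(string connectionString, int fromId, int toId, decimal amount)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                // Both updates run inside one transaction: either both are
                // committed, or neither is.
                using (var transaction = connection.BeginTransaction())
                {
                    try
                    {
                        var debit = new SqlCommand(
                            "UPDATE Accounts SET Balance = Balance - @amount WHERE Id = @id",
                            connection, transaction);
                        debit.Parameters.AddWithValue("@amount", amount);
                        debit.Parameters.AddWithValue("@id", fromId);
                        debit.ExecuteNonQuery();

                        var credit = new SqlCommand(
                            "UPDATE Accounts SET Balance = Balance + @amount WHERE Id = @id",
                            connection, transaction);
                        credit.Parameters.AddWithValue("@amount", amount);
                        credit.Parameters.AddWithValue("@id", toId);
                        credit.ExecuteNonQuery();

                        transaction.Commit();   // both changes become permanent together
                    }
                    catch
                    {
                        transaction.Rollback(); // undo the debit if anything failed
                        throw;
                    }
                }
            }
        }
    }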

Consistency


The consistency property ensures that the database remains in a consistent state; more precisely, it says that any transaction will take the database from one consistent state to another consistent state. Looking again at the account transfer system, the system is consistent if the total of all accounts is constant. If an error occurs and the money is removed from account A and not added to account B, then the total in all accounts would have changed. The system would no longer be consistent. By rolling back the removal from account A, the total will again be what it should be, and the system is back in a consistent state.



The consistency property does not say how the DBMS should handle an inconsistency other than ensure the database is clean at the end of the transaction. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction could be rolled back to the pre-transactional state - or it would be equally valid for the DBMS to take some patch-up action to get the database in a consistent state. Thus, if the database schema says that a particular field is for holding integer numbers, the DBMS could decide to reject attempts to put fractional values there, or it could round the supplied values to the nearest whole number: both options maintain consistency.
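As a sketch of pushing such a rule into the schema so the DBMS can reject violating changes, the following assumes a SQL Server database with an Accounts table; the constraint name and values are purely illustrative.

    using System;
    using System.Data.SqlClient;

    class ConsistencyExample
    {
        static void Demo(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // The rule "a balance can never be negative" becomes part of the
                // schema, so no committed transaction can leave it violated.
                new SqlCommand(
                    "ALTER TABLE Accounts ADD CONSTRAINT CK_Balance_NonNegative CHECK (Balance >= 0)",
                    connection).ExecuteNonQuery();

                try
                {
                    // This update would break the rule, so the DBMS rejects it,
                    // keeping the database consistent.
                    new SqlCommand(
                        "UPDATE Accounts SET Balance = -100 WHERE Id = 1",
                        connection).ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    Console.WriteLine("Rejected by the DBMS: " + ex.Message);
                }
            }
        }
    }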



Isolation


The isolation portion of the ACID properties is needed when there are concurrent transactions. Concurrent transactions are transactions that occur at the same time, such as multiple users accessing shared objects. This situation is illustrated at the top of the figure as activities occurring over time. The safeguards a DBMS uses to prevent conflicts between concurrent transactions are collectively referred to as isolation.



As an example, if two people are updating the same catalog item, it's not acceptable for one person's changes to be "clobbered" when the second person saves a different set of changes. Both users should be able to work in isolation, each working as though he or she were the only user. Each set of changes must be isolated from those of the other users.




An important concept in understanding isolation through transactions is serializability. Transactions are serializable when the effect on the database is the same whether the transactions are executed in serial order or in an interleaved fashion. As you can see at the top of the figure, Transactions 1 through 3 are executing concurrently over time. The effect on the DBMS is that the transactions may execute in serial order based on consistency and isolation requirements. If you look at the bottom of the figure, you can see several ways in which these transactions may execute. It is important to note that serialized execution does not imply that the transaction that starts first will automatically be the one that terminates before the other transactions in the serial order.
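As a small illustration, the ADO.NET sketch below asks the DBMS for the strictest isolation level, Serializable, for one transaction; the connection string, table, and query are placeholders.

    using System.Data;
    using System.Data.SqlClient;

    class IsolationExample
    {
        static void ReadCatalogItem(string connectionString, int itemId)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
                {
                    var command = new SqlCommand(
                        "SELECT Price FROM CatalogItems WHERE Id = @id",
                        connection, transaction);
                    command.Parameters.AddWithValue("@id", itemId);

                    // Under Serializable, the rows read here cannot be changed by
                    // another transaction until this one commits or rolls back.
                    object price = command.ExecuteScalar();

                    transaction.Commit();
                }
            }
        }
    }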



Durability


A transaction is durable in that, once it has been successfully completed, all of the changes it made to the system are permanent. There are safeguards that will prevent the loss of information, even in the case of system failure. By logging the steps that the transaction performs, the state of the system can be recreated even if the hardware itself has failed. The concept of durability allows the developer to know that a completed transaction is a permanent part of the system, regardless of what happens to the system later on. Durability refers to the ability of the system to recover committed transaction updates if either the system or the storage media fails.

Features to consider for durability:

  • recovery to the most recent successful commit after a database software failure
  • recovery to the most recent successful commit after an application software failure
  • recovery to the most recent successful commit after a CPU failure
  • recovery to the most recent successful backup after a disk failure
  • recovery to the most recent successful commit after a data disk failure

Database Management System

What is a DBMS?


  • A DBMS is a system software package that manages an integrated collection of data records and files, known as databases. It allows different user application programs to easily access the same database.
  • A database management system is a system in which related data is stored in an "efficient" and "compact" manner. Efficient means that the data stored in the DBMS can be accessed quickly, and compact means that the data occupies very little space in the computer's memory. The phrase "related data" means that the data stored in the DBMS is about some particular topic.



Components of DBMS





Transaction Management

  • A transaction is a sequence of database operations that represents a logical unit of work and that accesses a database and transforms it from one state to another.
  • A transaction can update a record, or delete or modify a set of records.

Concurrency control

Concurrency control is the database management activity of coordinating the actions of database-manipulating processes that operate concurrently, access shared data, and can potentially interfere with one another.

Recovery Management

The recovery management system in a database ensures that aborted or failed transactions have no adverse effect on the database or on other transactions.

Security Management

Security refers to the protection of data against unauthorized access. The security mechanisms of a DBMS ensure that only authorized users are given access to the data in the database.

Language Interface

The DBMS provides languages for the definition and manipulation of the data in the database. Data structures are created using data definition language (DDL) commands, and the data is manipulated using data manipulation language (DML) commands.
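A brief sketch of both kinds of commands going through the same interface is shown below, using ADO.NET against an assumed SQL Server database; the table and columns are illustrative.

    using System.Data.SqlClient;

    class LanguageInterfaceExample
    {
        static void Demo(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Data definition language (DDL): describe the structure.
                new SqlCommand(
                    "CREATE TABLE Students (Id INT PRIMARY KEY, Name NVARCHAR(100))",
                    connection).ExecuteNonQuery();

                // Data manipulation language (DML): work with the data.
                var insert = new SqlCommand(
                    "INSERT INTO Students (Id, Name) VALUES (@id, @name)", connection);
                insert.Parameters.AddWithValue("@id", 1);
                insert.Parameters.AddWithValue("@name", "Ada");
                insert.ExecuteNonQuery();
            }
        }
    }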

Storage Management

The DBMS provides a mechanism for the management of permanent storage of the data. The internal schema defines how the data should be stored by the storage management mechanism, and the storage manager interfaces with the operating system to access the physical storage.

Data Catalog Management

The data catalog, or data dictionary, is a system database that contains descriptions of the data in the database (metadata). It contains information about the data, relationships, constraints, and the entire schema that organizes these features into a unified database.
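As a sketch of reading this metadata, the following queries the standard INFORMATION_SCHEMA views through ADO.NET, assuming a SQL Server database; the connection string is a placeholder.

    using System;
    using System.Data.SqlClient;

    class CatalogExample
    {
        static void ListColumns(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                var command = new SqlCommand(
                    "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS",
                    connection);
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row describes one column somewhere in the database.
                        Console.WriteLine(
                            reader.GetString(0) + "." + reader.GetString(1) + " : " + reader.GetString(2));
                    }
                }
            }
        }
    }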


Need For DBMS

  • Data independence and efficient access.
  • Reduced application development time.
  • Data integrity and security.
  • Uniform data administration.
  • Concurrent access, recovery from crashes.

Introduction to Database Management System


A database consists of four elements, as given below:



  1. Data

  2. Relationship

  3. Schema

  4. Constraints






Data

  • Data are binary computer representations of stored logical entities.
  • Software is divided into two general categories: data and programs.
  • A program is a collection of instructions for manipulating data.
  • Data exist in various forms: as numbers and text on pieces of paper, as bits and bytes stored in electronic memory, or as facts stored in a person's mind.


Relationship


  • Relationships explain the correspondence between various data elements

Constraints


  • Constraints are predicates that define correct database states.




Schema


  • Schema describes the organization of data and relationships within the database.
  • Schema defines various views of the database for the use of the various system components of the database management system and for the application's security.
  • A schema separates the physical aspects of data storage from the logical aspects of data representation.
  • In database management systems, data files are the files that store the database information, whereas other files, such as index files and data dictionaries, store administrative information known as metadata.

  • Databases are organized into fields, records, and files.
  1. Field: a single piece of information.

  2. Record: one complete set of fields.

  3. File: a collection of records.


Types of schema

  1. Internal schema: defines how and where the data are organized in physical data storage.

  2. Conceptual schema: defines the stored data structures in terms of the database model used.

  3. External schema: defines a view (or views) of the database for particular uses.

Open Database Connectivity (ODBC)


Open Database Connectivity (ODBC) helped address the problem of needing to know the details of each DBMS used. ODBC provides a single interface for accessing a number of database systems. To accomplish this, ODBC provides a driver model for accessing data. Any database provider can write a driver for ODBC to access data from their database system. This enables developers to access that database through the ODBC drivers instead of talking directly to the database system. For data sources such as files, the ODBC driver plays the role of the engine, providing direct access to the data source. In cases where the ODBC driver needs to connect to a database server, the ODBC driver typically acts as a wrapper around the API exposed by the database server.



With this model, developers can move from one DBMS to another and use many of the skills they have already acquired. Perhaps more important, a developer can write an application that doesn't target a specific database system. This is especially beneficial for vendors who write applications to be consumed by multiple customers. It gives customers the capability to choose the back-end database system they want to use, without requiring vendors to create several versions of their applications.
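A minimal sketch of that single interface is shown below, going through an ODBC driver from managed code via ADO.NET's ODBC provider; the DSN, credentials, and query are placeholders.

    using System;
    using System.Data.Odbc;

    class OdbcExample
    {
        static void Demo()
        {
            // Any database with an ODBC driver can sit behind this DSN;
            // the application code does not change.
            using (var connection = new OdbcConnection("DSN=MyDataSource;Uid=user;Pwd=secret;"))
            {
                connection.Open();
                var command = new OdbcCommand("SELECT Name FROM Customers", connection);
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }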


Limitations of ODBC




  1. First, it is only capable of supporting relational data. If you need to
    access a hierarchical data source such as LDAP, or semi-structured data, ODBC can’t help you.

  2. Second, it can only handle SQL statements, and the result must be representable in the form of rows and columns. Overall, though, ODBC was a huge success, considering what the previous environment was like.


ADO.NET


With the release of the .NET Framework, Microsoft introduced a new data access model, called ADO.NET. The ActiveX Data Object acronym was no longer relevant, as ADO.NET was not ActiveX, but Microsoft kept the acronym due to the huge success of ADO. In reality, it's an entirely new data access model written in the .NET Framework. ADO.NET supports communication to data sources through both ODBC and OLE-DB, but it also offers another option of using database-specific data providers. These data providers offer greater performance by being able to take advantage of data-source-specific optimizations. By using custom code for the data source instead of the generic ODBC and OLE-DB code, some of the overhead is also avoided. The original release of ADO.NET included a SQL provider and an OLE-DB provider, with the ODBC and Oracle providers being introduced later. Many vendors have also written providers for their databases since. The figure below shows the connection options available with ADO.NET.






With ADO.NET, the days of the recordset and cursor are gone. The model is entirely new, and consists of five basic objects:


  • Connection—The Connection object is responsible for establishing and maintaining the connection to the data source, along with any connection-specific information.



  • Command—The Command object stores the query that is to be sent to the data source, and any applicable parameters.



  • DataReader—The DataReader object provides fast, forward-only reading capability to quickly loop through the records.



  • DataSet—The DataSet object, along with its child objects, is what really makes ADO.NET unique. It provides a storage mechanism for disconnected data. The DataSet never communicates with any data source and is totally unaware of the source of the data used to populate it. The best way to think of it is as an in-memory repository to store data that has been retrieved.



  • DataAdapter—The DataAdapter object is what bridges the gap between the DataSet and the data source. The DataAdapter is responsible for retrieving the data from the Command object and populating the DataSet with the data returned. The DataAdapter is also responsible for persisting changes to the DataSet back to the data source.
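A minimal sketch tying the five objects together is given below; the connection string, table, and columns are placeholders rather than part of the ADO.NET API.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class AdoNetObjectsExample
    {
        static void Demo(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))          // Connection
            {
                var command = new SqlCommand("SELECT Id, Name FROM Customers",    // Command
                                             connection);

                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())            // DataReader
                {
                    // Fast, forward-only pass over the records.
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetInt32(0) + " " + reader.GetString(1));
                    }
                }

                // Disconnected access: the DataAdapter fills a DataSet, which keeps
                // the rows in memory independently of the data source.
                var adapter = new SqlDataAdapter(command);                         // DataAdapter
                var dataSet = new DataSet();                                       // DataSet
                adapter.Fill(dataSet, "Customers");

                Console.WriteLine(dataSet.Tables["Customers"].Rows.Count + " rows cached in memory");
            }
        }
    }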





Advantages



  • ADO.NET made several huge leaps forward. Arguably, the greatest was the introduction of truly disconnected data access. Maintaining a connection to a database server such as MS SQL Server is an expensive operation. The server allocates resources to each connection, so it's important to limit the number of simultaneous connections. By disconnecting from the server as soon as the data is retrieved, instead of when the code is done working with that data, that connection becomes available for another process, making the application much more scalable.



  • Another feature of ADO.NET that greatly improved performance was the introduction of connection pooling. Not only is maintaining a connection to the database an expensive operation, but creating and destroying that connection is also very expensive. Connection pooling cuts down on this. When a connection is destroyed in code, the Framework keeps it open in a pool. When the next process comes around that needs a connection with the same credentials, it retrieves it from the pool, instead of creating a new one.



  • Several other advantages are made possible by the DataSet object. The DataSet object stores the data as XML, which makes it easy to filter and sort the data in memory. It also makes it easy to convert the data to other formats, as well as easily persist it to another data store and restore it again.
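A small sketch of the last point is given below, persisting a DataSet to XML and restoring it; the file name is illustrative.

    using System.Data;

    class DataSetXmlExample
    {
        static void Demo(DataSet orders)
        {
            // The DataSet serializes its tables (and, with WriteSchema, its schema)
            // to XML without any database connection.
            orders.WriteXml("orders.xml", XmlWriteMode.WriteSchema);

            // Rebuild the in-memory store from the file later.
            var restored = new DataSet();
            restored.ReadXml("orders.xml");
        }
    }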


ActiveX Data Objects (ADO)

Microsoft introduced ActiveX Data Objects (ADO) primarily to provide a higher-level API for working with OLE-DB. With this release, Microsoft took many of the lessons from the past to build a lighter, more efficient, and more universal data access API. Unlike RDO, ADO was initially promoted as a replacement for both DAO and RDO. At the time of its release, it (along with OLE-DB) was widely believed to be a universal solution for accessing any type of data—from databases to e-mail, flat text files, and spreadsheets.






ADO represented a major shift from previous methods of data access. With DAO and RDO, developers were expected to navigate a tree of objects in order to build and execute queries. For example, to execute a simple insert query in RDO, developers couldn't just create an rdoQuery object and execute it. Instead, they first needed to create the rdoEngine object, then the rdoEnvironment as a child of it, then an rdoConnection, and finally the rdoQuery. It was a very similar situation with DAO. With ADO, however, this sequence was much simpler. Developers could just create a command object directly, passing in the connection information and executing it. For simplicity and best practice, most developers would still create a separate command object, but for the first time the object could stand alone.



As stated before, ADO was primarily released to complement OLE-DB; however, ADO was not limited to just communicating with OLE-DB data sources. ADO introduced the provider model, which enabled software vendors to create their own providers relatively easily, which could then be used by ADO to communicate with a given vendor’s data source and implement many of the optimizations specific to that data source. The ODBC provider that shipped with ADO was one example of this. When a developer connected to an ODBC data source, ADO would communicate through the ODBC provider instead of through OLE-DB. More direct communication to the data source resulted in better performance and an easily extensible framework. Figure above shows this relationship.

In addition to being a cleaner object model, ADO also offered a wider feature set to help lure developers away from DAO and RDO. These included the following:

  • Batch Updating—For the first time, users enjoyed the capability to make changes to an entire recordset in memory and then persist these changes back to the database by using the UpdateBatch command.
  • Disconnected Data Access—Although this wasn’t available in the original release, subsequent releases offered the capability to work with data in a disconnected state, which greatly reduced the load placed on database servers.
  • Multiple Recordsets—ADO provided the capability to execute a query that returns multiple recordsets and work with all of them in memory. This feature wasn't available in ADO.NET until a later release, where it is known as Multiple Active Result Sets (MARS).
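As a side note on the ADO.NET counterpart just mentioned, the sketch below enables MARS through the connection string, assuming SQL Server 2005 or later; the server, database, tables, and queries are placeholders.

    using System.Data.SqlClient;

    class MarsExample
    {
        static void Demo(string server, string database)
        {
            var connectionString =
                "Server=" + server + ";Database=" + database +
                ";Integrated Security=true;MultipleActiveResultSets=True";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // With MARS enabled, two readers can be open on the same connection.
                using (var customers = new SqlCommand("SELECT Id FROM Customers", connection).ExecuteReader())
                using (var orders = new SqlCommand("SELECT Id FROM Orders", connection).ExecuteReader())
                {
                    while (customers.Read() && orders.Read())
                    {
                        // Interleave work on both result sets here.
                    }
                }
            }
        }
    }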

Remote Data Objects (RDO)


Remote Data Objects (RDO) was Microsoft’s solution to the slow performance created by DAO. For talking to databases other than Microsoft Access, RDO did not use the JET engine like DAO; instead, it communicated directly with the ODBC layer. Figure below shows this relationship.






Removing the JET engine from the call stack greatly improved performance to ODBC data sources. The JET engine was only used when accessing a Microsoft Access database. In addition, RDO had the capability to use client-side cursors to navigate the records, as opposed to the server-side cursor requirements of DAO. This greatly reduced the load on the database server, enabling not only the application to perform better, but also the databases on which that application was dependent.


RDO was primarily targeted toward larger, commercial customers, many of whom avoided DAO due to the performance issues. Instead of RDO replacing DAO, the two largely co-existed. This happened for several reasons:

  1. First, users who developed smaller applications, where performance wasn’t as critical, didn’t want to take the time to switch over to the new API.
  2. Second, RDO was originally only released with the Enterprise Edition of Visual Basic, so some developers didn’t have a choice.
  3. Third, with the release of ODBCDirect, a DAO add-on that routed the ODBC requests through RDO instead of the JET engine, the performance gap between the two became much smaller.
  4. Finally, it wasn’t long after the release of RDO that Microsoft’s next universal access API was released.

Data Access Objects (DAO)


With the release of Visual Basic 2.0, developers were introduced to a new method for accessing data, known as Data Access Objects (DAO). This was Microsoft's first attempt to create a data consumer API. Although it had very humble beginnings, and when first released only supported forward-only operations against ODBC data sources, it was the beginning of a series of libraries that would lead developers closer to the ideal of Universal Data Access. It also helped developers using higher-level languages such as Visual Basic to take advantage of the power of ODBC that developers using lower-level languages such as C were beginning to take for granted.





DAO was based on the JET engine, which was largely designed to help developers take advantage of the desktop database application Microsoft was about to release, Microsoft Access. It served to provide another layer of abstraction between the application and data access, making the developer’s task simpler.
Although the initial, unnamed release with Visual Basic 2.0 only supported ODBC connections, the release of Microsoft Access 1.0 marked the official release of DAO 1.0, which supported direct communication with Microsoft Access databases without using ODBC. Figure below shows this relationship.



DAO 2.0 was expanded to support OLE-DB connections and the advantages that come along with it. It also provided a much more robust set of functionality for accessing ODBC data stores through the JET engine. Later, versions 2.5 and 3.0 were released to provide support for ODBC 2.0 and the 32-bit OS introduced with Windows 95.



Drawback



The main problem with DAO is that it can only talk to the JET engine. The JET engine then communicates with ODBC to retrieve the data. Going through this extra translation layer adds unnecessary overhead and makes accessing data through DAO slow.

Types of Application Architectures

Applications are developed to support organisations in their business operations. An application receives input, processes the data based on business rules, and provides data as output. The functions performed by an application can be classified into three types:

  1. User Service
  2. Business Service
  3. Data Service
  • User Service
This is regarded as the front end of the solution. It is also called the presentation layer because it provides the interactive user interface.
  • Business Service
It controls the enforcement of an organisation's business rules on its data. Business rules encompass those practices and activities that define the behaviour of an organisation. For example, an organisation may decide that the credit limit of any client cannot exceed $200,000. The business service layer sets rules or validations to enforce such policies, which means the back end does not receive invalid data.
  • Data Service
It comprises the data and the functions for manipulating this data.

These three layers form the basis of the models, or architectures, used in application development in an organisation. Applications can be single-tier, two-tier, or three-tier.

Single-Tier Architecture

The application layers discussed earlier help explain what single-tier architecture is. In single-tier architecture, a single executable file handles all the functions relating to the user, business, and data service layers. This is also called a monolithic application.


Some early COBOL programs used this architecture.


Two-Tier Architecture

In two-tier architecture, an application is broadly divided into two parts:

  • Client: implements the user interface
  • Server: stores the data

In this architecture, the user and data services are either on the same machine or on different machines. In two-tier architecture, the business layer is implemented in one of the following ways:


  1. Fat Client

    In this method, the business service layer is combined with the user service layer. Clients execute the presentation logic and enforce the business logic; the server stores data and processes transactions. This method is used when the server is loaded with transaction-processing activities and is not equipped to process business logic.

  2. Fat Server

    Here, the business service layer is combined with the data service layer. As the business service is stored on the server, most of the processing takes place on the server.

  3. Dividing business services between the user and data services

    Business services can also be distributed between the user and data services. Here, the processing of business logic is divided between the user and data services.



Three-Tier Architecture

In three-tier architecture, all three layers reside separately, either on the same machine or on different machines, which makes it entirely different from single-tier and two-tier architecture.


  • The user interface interacts with the business logic.
  • The business logic validates the data sent by the interface and forwards it to the database if it conforms to the requirements.
  • The front end interacts with the business logic, which in turn interacts with the database.
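To make the separation concrete, here is a minimal sketch of the three layers as separate classes, reusing the $200,000 credit-limit rule from earlier; the class and method names are illustrative, not a prescribed design.

    using System;
    using System.Collections.Generic;

    // Data service: the only layer that touches storage (in-memory here for brevity).
    class CustomerDataService
    {
        private readonly Dictionary<int, decimal> creditLimits = new Dictionary<int, decimal>();

        public void SaveCreditLimit(int customerId, decimal limit)
        {
            creditLimits[customerId] = limit;
        }
    }

    // Business service: enforces the organisation's rules before data reaches storage.
    class CustomerBusinessService
    {
        private readonly CustomerDataService data = new CustomerDataService();

        public void SetCreditLimit(int customerId, decimal limit)
        {
            if (limit > 200000m)
                throw new ArgumentException("Credit limit cannot exceed $200,000.");
            data.SaveCreditLimit(customerId, limit);
        }
    }

    // User service (presentation): collects input and calls the business layer.
    class ConsoleUi
    {
        static void Main()
        {
            var business = new CustomerBusinessService();
            business.SetCreditLimit(1, 150000m);
            Console.WriteLine("Credit limit saved.");
        }
    }

In a three-tier deployment, each of these layers could equally run on a separate machine, as described above.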
