Learn IT

Free learning on anything and everything in Information Technology.

Features Of Common Language Runtime

The .NET Framework provides a run-time environment called the common language runtime, which runs the code and provides services that make the development process easier.


The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.



With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.



The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.



The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.



In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.
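As an illustrative sketch (not taken from the original text), the following C# fragment shows this in practice: the buffer allocated inside the loop is never explicitly freed, because the garbage collector reclaims each one once it is no longer referenced.

using System;

class MemoryDemo
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            // Each iteration allocates a new 1 MB buffer. There is no free/delete:
            // once the previous buffer is unreachable, the garbage collector
            // reclaims it automatically, so the loop does not leak memory.
            byte[] buffer = new byte[1024 * 1024];
            buffer[0] = (byte)i;
        }

        Console.WriteLine("Allocated 1000 buffers without explicit deallocation.");
    }
}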



The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.



While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.



The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.



Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:


  • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.

  • To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.

  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.

  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.

  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.


Components of .NET Framework



The .NET Framework has two main components:


The common language runtime:


The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.


The .NET Framework Class Library:


The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.



The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.



For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.



Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

Creating Database Objects Using Managed Code (Microsoft .NET 2.0)

One of the neat features of SQL Server 2005 is the integration with the .NET CLR. The integration of CLR with SQL Server extends the capability of SQL Server in several important ways. This integration enables developers to create database objects such as stored procedures, user defined functions, and triggers by using modern object-oriented languages such as VB.NET and C#.

In this post, I will demonstrate how to create stored procedures using C#. Before looking at the code, let us understand the pros and cons of using a managed language in the database tier to create server-side objects.

T-SQL Vs Managed Code

Although T-SQL, the existing data access and manipulation language, is well suited for set-oriented data access operations, it also has limitations. It was designed more than a decade ago and it is a procedural language rather than an object-oriented language. The integration of the .NET CLR with SQL Server enables the development of stored procedures, user-defined functions, triggers, aggregates, and user-defined types using any of the .NET languages.

This is enabled by the fact that the SQL Server engine hosts the CLR in-process. All managed code that executes in the server runs within the confines of the CLR. The managed code accesses the database using ADO.NET in conjunction with the new SQL Server Data Provider. Both Visual Basic .NET and C# are modern programming languages offering full support for arrays, structured exception handling, and collections.

Developers can leverage CLR integration to write code that has more complex logic and is more suited for computation tasks using languages such as Visual Basic .NET and C#. Managed code is better suited than Transact-SQL for number crunching and complicated execution logic, and features extensive support for many complex tasks, including string handling and regular expressions. T-SQL is a better candidate in situations where the code will mostly perform data access with little or no procedural logic.
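For example, a hedged sketch of a CLR scalar function that validates an e-mail address with a regular expression might look like the following; the function name and pattern are purely illustrative and not part of the original post, but the same check would be awkward to express with T-SQL string functions.

using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public class StringUtilities
{
    // Hypothetical CLR scalar UDF: returns true when the input looks like an
    // e-mail address. Regular expressions like this are straightforward in
    // managed code but cumbersome to emulate in T-SQL.
    [SqlFunction]
    public static SqlBoolean IsValidEmail(SqlString input)
    {
        if (input.IsNull)
            return SqlBoolean.False;

        return new SqlBoolean(
            Regex.IsMatch(input.Value, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"));
    }
}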

Creating CLR Based Stored Procedures

For the purposes of this example, create a new SQL Server Project using Visual C# as the language of choice in Visual Studio 2005. Since you are creating a database project, you need to associate a data source with the project. When you create the project, Visual Studio will automatically prompt you to either select an existing database reference or add a new database reference. Choose pubs as the database. Once the project is created, select Add Stored Procedure from the Project menu. In the Add New Item dialog box, enter Authors.cs and click the Add button. After the class is created, modify the code in the class to look like the following.



using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class Authors
{
    [SqlProcedure]
    public static void GetAuthors()
    {
        // SqlContext.Pipe lets the procedure stream results back to the caller.
        SqlPipe sp = SqlContext.Pipe;
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = "Select DatePart(second, GetDate()) " +
                              " As timestamp, * from authors";
            // Execute the query and send the rows straight down the pipe.
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }

    [SqlProcedure]
    public static void GetTitlesByAuthor(string authorID)
    {
        // Use a parameterized query (@authorID) instead of string concatenation,
        // so the author ID is passed safely as a parameter value.
        string sql = "select T.title, T.price, T.type, T.pubdate " +
                     " from authors A " +
                     " inner join titleauthor TA on A.au_id = TA.au_id " +
                     " inner join titles T on TA.title_id = T.title_id " +
                     " where A.au_id = @authorID";
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlPipe sp = SqlContext.Pipe;
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = sql;
            SqlParameter paramauthorID = new SqlParameter("@authorID", SqlDbType.VarChar, 11);
            paramauthorID.Direction = ParameterDirection.Input;
            paramauthorID.Value = authorID;
            cmd.Parameters.Add(paramauthorID);
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }
}
Let us examine the above code. It starts by importing the required namespaces and then declares a class named Authors. There are two important classes in the Microsoft.SqlServer.Server namespace that are specific to the in-proc provider:
  • SqlContext: This class encapsulates the extensions required to execute in-process code in SQL Server 2005. In addition it provides the transaction and database connection which are part of the environment in which the routine executes.
  • SqlPipe: This class enables routines to send tabular results and messages to the client. This class is conceptually similar to the Response class found in ASP.NET in that it can be used to send messages to the callers.

The Authors class contains two static methods named GetAuthors and GetTitlesByAuthor. As the name suggests, the GetAuthors method simply returns all the authors from the authors table in the pubs database and the GetTitlesByAuthor method returns all the titles for a specific author.

Inside the GetAuthors method, you start by getting reference to the SqlPipe object by invoking the Pipe property of the SqlContext class.

SqlPipe sp = SqlContext.Pipe;

Then you open the connection to the database using the SqlConnection object. Note that the connection string passed to the constructor of the SqlConnection object is set to "context connection=true", meaning that the command runs on the same connection, and in the same security context, as the session that invoked the stored procedure, rather than opening a new connection to the database.

using (SqlConnection conn = new SqlConnection("context connection=true"))

Next, you open the connection to the database using the Open() method.

conn.Open();

Then you create an instance of the SqlCommand object and set its properties appropriately.

SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.Connection = conn;
cmd.CommandText = "Select DatePart(second, GetDate()) " + " As timestamp, * from authors";

Finally, you execute the SQL query by calling the ExecuteReader method of the SqlCommand object.

SqlDataReader rdr = cmd.ExecuteReader();

Using the SqlPipe object, you then return tabular results and messages to the client. This is accomplished using the Send method of the SqlPipe class.
sp.Send(rdr);
The Send method provides various overloads that are useful in transmitting data through the pipe to the calling application. Various overloads of the Send method are:

  • Send (ISqlDataReader) - Sends the tabular results in the form of a SqlDataReader object.
  • Send (ISqlDataRecord) - Sends the results in the form of a SqlDataRecord object.
  • Send (ISqlError) - Sends error information in the form of a SqlError object.
  • Send (String) - Sends messages in the form of a string value to the calling application.

Both the methods in the Authors class utilize one of the Send methods that allows you to send tabular results to the client application in the form of a SqlDataReader object. Since the GetTitlesByAuthor method implementation is very similar to the GetAuthors method, I will not be discussing that method in detail.

Now that the stored procedures are created, deploying them is simple and straightforward. Before deploying, you need to build the project first. To build the project, select Build->Build from the menu. This will compile all the classes in the project, and if there are any compilation errors, they will be displayed in the Error List pane. Once the project is built, you can deploy it onto SQL Server by selecting Build->Deploy from the menu. This not only registers the assembly in SQL Server but also creates the stored procedures in SQL Server. Once the stored procedures are deployed, they can be invoked from the data access layer, which is the topic of focus in the next section.

Before executing the stored procedures, ensure you execute the following SQL script using SQL Server Management Studio to enable managed code execution in SQL Server.

EXEC sp_configure 'clr enabled', 1;

RECONFIGURE WITH OVERRIDE;

GO
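Once CLR execution is enabled and the assembly is deployed, the procedures can be called like any other stored procedure. The following is a minimal, hedged sketch of such a call from client code; the connection string is a placeholder and the author ID is just a sample value from the pubs database.

using System;
using System.Data;
using System.Data.SqlClient;

class AuthorsClient
{
    static void Main()
    {
        // Placeholder connection string; adjust the server name and credentials as needed.
        string connectionString = "Data Source=.;Initial Catalog=pubs;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("GetTitlesByAuthor", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@authorID", "172-32-1176");   // sample au_id

            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Console.WriteLine(rdr["title"]);
                }
            }
        }
    }
}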

Designing N-Tier Client/Server Architecture

Introduction

Designing N-Tier client/server architecture is no less complex than developing a two-tier architecture; however, the N-Tier architecture produces a far more flexible and scalable client/server environment. In two-tier architecture, the client and the server are the only layers, and both the presentation logic and the middle-tier logic are handled by the client. N-Tier architecture separates the application into a presentation layer, a business logic layer, a data access logic layer, and a database layer. The next section discusses each of these layers in detail.

Different Layers of an N-Tier Application

In a typical N-Tier environment, the client implements the presentation logic (a thin client). The business logic and data access logic are implemented on one or more application servers, and the data resides on one or more database servers. N-Tier architecture is thus typically defined by the following layers:
  • Presentation Layer: This is a front-end component, which is responsible for providing portable presentation logic. Because the client is freed of application-layer tasks, there is no need for heavyweight client technology. The presentation layer consists of standard ASP.NET Web Forms, ASP pages, documents, Windows Forms, and so on. This layer works with the results/output of the business logic layer and transforms the results into something usable and readable by the end user.

  • Business Logic Layer: Allows users to share and control business logic by isolating it from the other layers of the application. The business layer functions between the presentation layer and data access logic layers, sending the client's data requests to the database layer through the data access layer.

  • Data Access Logic Layer: Provides access to the database by executing a set of SQL statements or stored procedures. This is where you will write generic methods to interface with your data. For example, you will write a method for creating and opening a SqlConnection object, creating a SqlCommand object for executing a stored procedure, and so on (a brief sketch of such a helper appears after this list). As the name suggests, the data access logic layer contains no business rules or data manipulation/transformation logic. It is merely a reusable interface to the database.

  • Database Layer: Made up of a RDBMS database component such as SQL Server that provides the mechanism to store and retrieve data.
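As a hedged sketch (the class and method names are illustrative only, not a prescribed design), a data access layer helper that wraps connection and command handling might look like this:

using System.Data;
using System.Data.SqlClient;

public class DataAccessLayer
{
    private readonly string connectionString;

    public DataAccessLayer(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Generic helper: executes a stored procedure and returns the results
    // as a DataTable. No business rules live here, only database plumbing.
    public DataTable ExecuteStoredProcedure(string procedureName, params SqlParameter[] parameters)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(procedureName, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            if (parameters != null)
            {
                cmd.Parameters.AddRange(parameters);
            }

            DataTable result = new DataTable();
            using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(result);   // the adapter opens and closes the connection itself
            }
            return result;
        }
    }
}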

Steps to Implement ClickOnce Deployment in .NET 2.0

  • You create a Windows forms application and use the Publish option to deploy the application onto any of the following locations: File System, Local Web Server, FTP Site, or a Remote Web Site.

  • Once the application is deployed onto the target location, the users of the application can browse to the publish.htm file and install the application onto their machines. Note that the publish.htm file is the entry point for installing the application; this will be discussed later in this article.

  • Once the user has installed the application, a shortcut icon will be added to the desktop and the application will also be listed in the Control Panel/Add Remove Programs.

  • When the user launches the application again, the manifest contains all the information needed to decide whether the application should go back to the source location and check for updates. If, for instance, a newer version of the application is available, it will be automatically downloaded and made available to the user. Note that the new version is downloaded in a transacted manner, meaning that either the entire update is downloaded or nothing is downloaded; this ensures that the application's integrity is preserved. (A small programmatic update check is sketched after this list.)
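Beyond the automatic check, an application can also query for updates itself through the System.Deployment.Application API. The following is a minimal sketch of such a check, assuming the application was installed via ClickOnce and the project references the System.Deployment assembly.

using System;
using System.Deployment.Application;

class UpdateChecker
{
    static void CheckForUpdate()
    {
        // IsNetworkDeployed is false when the application was started from
        // Visual Studio or copied locally rather than installed through ClickOnce.
        if (!ApplicationDeployment.IsNetworkDeployed)
            return;

        ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;

        if (deployment.CheckForUpdate())
        {
            // Downloads the new version; it is applied the next time the application starts.
            deployment.Update();
            Console.WriteLine("Update downloaded; restart the application to use it.");
        }
    }
}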

ClickOnce Deployment In .NET Framework 2.0

In the past it was very common for developers to choose web applications over rich Windows UIs because of the challenges associated with deploying a smart client Windows Forms application. However, with the release of Visual Studio 2005, Microsoft introduced a new technology named ClickOnce that is designed to solve the deployment issues for Windows Forms applications. This new technology not only provides an easy application installation mechanism but also enables easy deployment of upgrades to existing applications.

Since the introduction of powerful server-side web technologies such as ASP, JSP, and ASP.NET, developers have shown more interest in building web applications than in Windows applications. The factors that attracted developers toward web applications can be summarized as follows:


  • A web application is ubiquitous, making it accessible in all the places where an internet connection is available.

  • The second and most important factor is the deployment. With web applications, there is no need to deploy any software on the client side. All the client application needs is just the browser. This makes it possible for the developers to easily deploy updates to the existing web application without impacting the client machines.

If you talk to developers, you will find that the main reason they prefer web applications over Windows applications is the second point in the above list. Even though this has been true of traditional applications, Microsoft is making every attempt to ensure that Windows applications can be deployed and updated with the same ease as web applications.

You can see proof of this in the initial release of the .NET Framework, when Microsoft introduced deployment of Windows Forms applications through HTTP. Using this approach, you could simply use an HREF HTML element to point to a managed executable (.exe). When the user clicked the link, Internet Explorer would automatically download and install the executable on the client machine. Even though this approach sounds promising, it also presents some interesting challenges.

One of the most important challenges is downloading updated code through HTTP. Since this process was not transacted, the application could be left in an inconsistent state.

Moreover, there was no way to specify whether the application could work in offline mode in addition to the traditional online mode. This approach also did not provide the ability to create shortcuts for launching the application. Even though it had a number of issues, it could still be used in controlled environments.

However, for complex Windows Forms applications with multiple assembly dependencies, you need a transacted and easily updatable deployment mechanism. This is exactly what the ClickOnce technology introduced with .NET Framework 2.0 provides.

2D Graphics Techniques

2D graphics models may combine geometric models (also called vector graphics), digital images (also called raster graphics), text to be typeset (defined by content, font style and size, color, position, and orientation), mathematical functions and equations, and more. These components can be modified and manipulated by two-dimensional geometric transformations such as translation, rotation, and scaling.

In object-oriented graphics, the image is described indirectly by an object endowed with a self-rendering method—a procedure which assigns colors to the image pixels by an arbitrary algorithm. Complex models can be built by combining simpler objects, in the paradigms of object-oriented programming.


Direct painting

A convenient way to create a complex image is to start with a blank "canvas" raster map (an array of pixels, also known as a bitmap) filled with some uniform background color and then "draw", "paint" or "paste" simple patches of color onto it, in an appropriate order. In particular, the canvas may be the frame buffer for a computer display.
Some programs will set the pixel colors directly, but most will rely on some 2D graphics library and/or the machine's graphics card, which usually implement the following operations (a short sketch using one such library appears after the list):

  • paste a given image at a specified offset onto the canvas;

  • write a string of characters with a specified font, at a given position and angle;

  • paint a simple geometric shape, such as a triangle defined by three corners or a circle with a given center and radius;

  • draw a line segment, arc, or simple curve with a virtual pen of given width.
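In the .NET world these operations map fairly directly onto the System.Drawing (GDI+) library. The following is a short, hedged sketch, assuming a Windows/.NET Framework environment with a reference to System.Drawing; the file name and coordinates are arbitrary.

using System.Drawing;

class CanvasDemo
{
    static void Main()
    {
        // A blank 400x300 "canvas" raster map filled with a uniform background color.
        using (Bitmap canvas = new Bitmap(400, 300))
        using (Graphics g = Graphics.FromImage(canvas))
        {
            g.Clear(Color.White);

            // Paint a simple geometric shape: a triangle defined by three corners.
            g.FillPolygon(Brushes.SteelBlue, new Point[]
            {
                new Point(50, 250), new Point(150, 50), new Point(250, 250)
            });

            // Draw a line segment with a virtual pen of a given width.
            using (Pen pen = new Pen(Color.Black, 3))
            {
                g.DrawLine(pen, 10, 290, 390, 290);
            }

            // Write a string of characters with a specified font at a given position.
            using (Font font = new Font("Arial", 14))
            {
                g.DrawString("Hello, 2D graphics", font, Brushes.Black, 20, 10);
            }

            canvas.Save("canvas.png", System.Drawing.Imaging.ImageFormat.Png);
        }
    }
}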


Extended color models

Text, shapes and lines are rendered with a client-specified color. Many libraries and cards provide color gradients, which are handy for the generation of smoothly varying backgrounds, shadow effects, etc. (see also Gouraud shading). The pixel colors can also be taken from a texture, e.g. a digital image (thus emulating rub-on screentones and the fabled "checker paint" which used to be available only in cartoons).

Painting a pixel with a given color usually replaces its previous color. However, many systems support painting with transparent and translucent colors, which only modify the previous pixel values. The two colors may also be combined in fancier ways, e.g. by computing their bitwise exclusive or. This technique is known as inverting color or color inversion, and is often used in graphical user interfaces for highlighting, rubber-band drawing, and other volatile painting—since re-painting the same shapes with the same color will restore the original pixel values.
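A hedged, library-independent sketch of these two compositing rules at the pixel level (plain alpha blending of a translucent color, and XOR-based inversion) is shown below; the values in Main are arbitrary examples.

using System;

class PixelOps
{
    // Alpha-blends one 8-bit channel of a translucent source color over the
    // destination channel. alpha = 255 replaces the destination entirely,
    // alpha = 0 leaves it unchanged.
    static byte BlendChannel(byte src, byte dst, byte alpha)
    {
        return (byte)((src * alpha + dst * (255 - alpha)) / 255);
    }

    // Paints a channel with bitwise exclusive-or. Repeating the operation with
    // the same value restores the original, which is why XOR painting is used
    // for rubber-band rectangles and other volatile highlights.
    static byte InvertChannel(byte original, byte paint)
    {
        return (byte)(original ^ paint);
    }

    static void Main()
    {
        byte blended = BlendChannel(200, 100, 128);   // roughly halfway: ~150
        byte once = InvertChannel(0x5A, 0xFF);        // 0xA5
        byte restored = InvertChannel(once, 0xFF);    // back to 0x5A
        Console.WriteLine("{0} {1:X2} {2:X2}", blended, once, restored);
    }
}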

Layers

The models used in 2D computer graphics usually do not provide for three-dimensional shapes, or three-dimensional optical phenomena such as lighting, shadows, reflection, refraction, etc. However, they usually can model multiple layers (conceptually of ink, paper, or film; opaque, translucent, or transparent) stacked in a specific order. The ordering is usually defined by a single number (the layer's depth, or distance from the viewer).

Layered models are sometimes called 2 1/2-D computer graphics. They make it possible to mimic traditional drafting and printing techniques based on film and paper, such as cutting and pasting; and allow the user to edit any layer without affecting the others. For these reasons, they are used in most graphics editors. Layered models also allow better anti-aliasing of complex drawings and provide a sound model for certain techniques such as mitered joints and the even-odd rule.

Layered models are also used to allow the user to suppress unwanted information when viewing or printing a document, e.g. roads and/or railways from a map, certain process layers from an integrated circuit diagram, or hand annotations from a business letter.

In a layer-based model, the target image is produced by "painting" or "pasting" each layer, in order of decreasing depth, on the virtual canvas. Conceptually, each layer is first rendered on its own, yielding a digital image with the desired resolution which is then painted over the canvas, pixel by pixel. Fully transparent parts of a layer need not be rendered, of course. The rendering and painting may be done in parallel, i.e. each layer pixel may be painted on the canvas as soon as it is produced by the rendering procedure.
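A minimal sketch of this painter's-algorithm style of compositing follows; the layer structure, field names, and grayscale pixel format are invented purely for illustration.

using System;
using System.Collections.Generic;

class Layer
{
    public double Depth;     // larger depth = farther from the viewer
    public byte[] Pixels;    // grayscale content, one byte per pixel
    public byte[] Alpha;     // 0 = fully transparent, 255 = fully opaque
}

class LayerCompositor
{
    // Paints each layer onto the canvas in order of decreasing depth,
    // so nearer layers end up on top of farther ones.
    static byte[] Composite(List<Layer> layers, int pixelCount)
    {
        byte[] canvas = new byte[pixelCount];   // uniform black background
        layers.Sort((a, b) => b.Depth.CompareTo(a.Depth));

        foreach (Layer layer in layers)
        {
            for (int i = 0; i < pixelCount; i++)
            {
                int a = layer.Alpha[i];
                if (a == 0)
                    continue;   // fully transparent parts need not be painted
                canvas[i] = (byte)((layer.Pixels[i] * a + canvas[i] * (255 - a)) / 255);
            }
        }
        return canvas;
    }

    static void Main()
    {
        var layers = new List<Layer>
        {
            new Layer { Depth = 2.0, Pixels = new byte[] { 50, 50 },   Alpha = new byte[] { 255, 255 } },
            new Layer { Depth = 1.0, Pixels = new byte[] { 200, 200 }, Alpha = new byte[] { 255, 0 } }
        };
        byte[] result = Composite(layers, 2);
        Console.WriteLine("{0} {1}", result[0], result[1]);   // 200 (near layer) and 50 (far layer shows through)
    }
}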

Layers that consist of complex geometric objects (such as text or polylines) may be broken down into simpler elements (characters or line segments, respectively), which are then painted as separate layers, in some order. However, this solution may create undesirable aliasing artifacts wherever two elements overlap the same pixel.

2D Computer Graphics

2D computer graphics is the computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. The word may stand for the branch of computer science that comprises such techniques, or for the models themselves.


(Figure: raster graphic sprites and masks.)

2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics (whose approach is more akin to photography than to typography).


In many domains, such as desktop publishing, engineering, and business, a description of a document based on 2D computer graphics techniques can be much smaller than the corresponding digital image—often by a factor of 1/1000 or more. This representation is also more flexible since it can be rendered at different resolutions to suit different output devices. For these reasons, documents and illustrations are often stored or transmitted as 2D graphic files.


2D computer graphics started in the 1950s, based on vector graphics devices. These were largely supplanted by raster-based devices in the following decades. The PostScript language and the X Window System protocol were landmark developments in the field.

Subfields Of Computer Graphics

Geometry

Geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on the exterior of the object, boundary representations are most common in computer graphics. Two dimensional surfaces are a good analogy for the objects most often used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. These representations are Lagrangian, meaning the spatial locations of the samples are independent. In recent years, however, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).

Subfields Of Geometry

  • Constructive solid geometry - Process by which complicated objects are modelled with implicit geometric objects and boolean operations.

  • Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.

  • Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.

  • Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.

  • Subdivision surfaces
  • Out-of-core mesh processing - another recent field which focuses on mesh datasets that do not fit in main memory.

Animation

Animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically most interest in this area has been focused on parametric and data-driven models, but in recent years physical simulation has experienced a renaissance due to the growing computational capacity of modern machines.

Subfields Of Animation

  • Performance capture
  • Character animation
  • Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)

Rendering

Rendering converts a model into an image either by simulating light transport to get physically-based photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are:
  • Transport (how much light gets from one place to another) and
  • Scattering (how surfaces interact with light).
Transport

Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.

Scattering

Models of scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering.
Shading can be broken down into two orthogonal issues, which are often studied independently:
  • Scattering: How light interacts with the surface at a given point.
  • Shading: How material properties vary across the surface.

The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
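As a toy illustration of the first issue (scattering at a single point), the ideal diffuse (Lambertian) case can be written as a tiny function. This is a deliberate simplification of a full BSDF and is not taken from the original text.

using System;

class Lambertian
{
    // Outgoing radiance contribution at a point for an ideal diffuse surface:
    // L_out = (albedo / pi) * L_in * max(0, cos(theta)), where theta is the
    // angle between the surface normal and the direction to the light.
    static double Shade(double albedo, double incomingRadiance, double cosTheta)
    {
        return albedo / Math.PI * incomingRadiance * Math.Max(0.0, cosTheta);
    }

    static void Main()
    {
        // Light hitting the surface head-on versus at a grazing angle.
        Console.WriteLine(Shade(0.8, 1.0, 1.0));   // brightest
        Console.WriteLine(Shade(0.8, 1.0, 0.1));   // much dimmer
    }
}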

Other subfields

  • Physically-based rendering - concerned with generating images according to the laws of geometric optics.
  • Real time rendering - focuses on rendering for interactive applications, typically using specialized hardware like GPUs.
  • Non-photorealistic rendering
  • Relighting - recent area concerned with quickly re-rendering scenes.

Computer Graphics

Computer graphics is a sub-field of computer science and is concerned with digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.


Definition

Computer graphics broadly studies the manipulation of visual and geometric information using computational techniques. Computer graphics as an academic discipline focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues.

Major subfields in computer graphics include:
  1. Geometry: studies ways to represent and process surfaces.
  2. Animation: studies ways to represent and manipulate motion.
  3. Rendering: studies algorithms to reproduce light transport.
  4. Imaging: studies image acquisition and image editing.

Types Of Operating System

Generally, there are four types of operating system:

Real-time Operating System:

A real-time operating system (RTOS) is an operating system that guarantees a certain capability within a specified time constraint. For example, an operating system might be designed to ensure that a certain object was available for a robot on an assembly line. In what is usually called a "hard" real-time operating system, if the calculation could not be performed for making the object available at the designated time, the operating system would terminate with a failure. In a "soft" real-time operating system, the assembly line would continue to function but the production output might be lower as objects failed to appear at their designated time, causing the robot to be temporarily unproductive.

In general, real-time operating systems are said to require:

  • Multitasking
  • Process threads that can be prioritized.
  • A sufficient number of interrupt levels.

Real-time operating systems are often required in small embedded operating systems that are packaged as part of microdevices. Some kernels can be considered to meet the requirements of a real-time operating system. However, since other components, such as device drivers, are also usually needed for a particular solution, a real-time operating system is usually larger than just the kernel.

Single-user, single-tasking operating system:

As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system.

Single-user, multi-tasking operating system:

This is the type of operating system most people use on their desktop and laptop computers today. Windows 98 and the Mac OS are both examples of an operating system that lets a single user have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word processor while downloading a file from the Internet while printing the text of an e-mail message.

Multi-user operating systems:

A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS, and mainframe operating systems, such as MVS, are examples of multi-user operating systems. It's important to differentiate here between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell NetWare can each support hundreds or thousands of networked users, but the operating systems themselves aren't true multi-user operating systems. The system administrator is the only "user" of Windows 2000 or NetWare; the network support and all of the remote user logins that the network enables are, in the overall plan of the operating system, a program being run by the administrative user.

Functions of Operating System

In any computer, the operating system performs the following functions:

  • Controls the backing store and peripherals such as disk drives and printers.

  • Controls the loading and running of programs.

  • Organises the use of memory between programs.

  • Organises processing time between programs and users.

  • Organises priorities between programs and users.

  • Maintains security and access rights of users.

  • Deals with errors and user instructions.

On a personal computer the operating system will:

  • Deal with the transfer of programs in and out of memory.

  • Allow the user to save files to a backing store.

  • Control the transfer of data to peripherals such as printers.

  • Provide the interface between user and computer - for example, Windows XP and OSX.

In a larger computer, such as a mainframe, the operating system works on the same principles.

Technical Approach for Migrating VB 6.0 Application to VB .NET

If you upgrade a Visual Basic 6.0 project group or an n-tier application to Visual Basic .NET, you must upgrade one project or tier at a time.

If your three-tier application includes a client component, a business component, and a data access component, you should upgrade the application in the following order:
  1. Client component
  2. Business component
  3. Data access component

In an n-tier application, always upgrade the client tier first, and then upgrade other tiers on the dependency tree. You should follow this order for two reasons:

  • This approach allows you to keep the application working. When you upgrade the client, you break and work with only one component of the application. All of the other components continue to work the same way that they did previously. With this approach, you isolate the work area. Alternately, if you upgrade the data tier first, suddenly you break the data tier and the components that depend on the data tier.

  • Visual Basic 6.0 locks type libraries (TypeLibs). This creates a problem if you need to rebuild the TypeLib or recompile the underlying dynamic-link library (DLL). If you upgrade the business tier first and then upgrade the client, you must continually stop and restart Visual Basic 6.0 every time you change the middle tier. Consider the following workflow:
  1. Upgrade the middle tier.
  2. Change the Visual Basic 6.0 client to access the new middle tier.
  3. Run the middle tier.

If you want to change the .NET DLL, you must then close Visual Basic 6.0, recompile in .NET, restart Visual Basic 6.0, and so on. You can avoid this problem if you upgrade the client first and then upgrade the middle tier.

To upgrade each Visual Basic 6.0 application, use the Upgrade tool that is included with Visual Basic .NET. The Upgrade tool is started when you use Visual Basic .NET to open a Visual Basic 6.0 project. When you use the Upgrade tool, the Visual Basic 6.0 project is not changed, and a new Visual Basic .NET project is created. Before you upgrade a Visual Basic 6.0 project, it is best to prepare it for upgrade.

Migration Strategy for Upgrading VB 6.0 Application to VB .NET

When we start developing any new software solution, certain steps are taken. We begin with a plan, identify processes, gather requirements, and eventually build the architecture of the solution. Once things start taking shape, we start with development. Why do we choose this path? We all know that the path for doing the analysis and design up front has been proven to save a lot of time and cost for software development. In order to migrate projects from any prior version of Visual Basic, the path for analysis and design up front yields the best results.

The analysis part is slightly different in this case. We begin by studying the current application, and try to identify code blocks that require changes. In order to migrate your VB applications, it is not recommended that you directly convert your existing applications to .NET and fix the converted code in .NET. It is always better to take the existing application to the "Migration Ready" stage.

Here are the steps for migrating applications from VB 6.0 to VB .NET:

  1. Evaluate the project and create a migration strategy.
  2. Make changes in the VB 6.0 project to bring it to the "Migration Ready" stage.
  3. Migrate using the Visual Basic .NET Migration tool.
  4. If the results are not up to the mark, make further changes in VB 6.0 and run the Migration tool again (repeat Steps 2 and 3 as necessary).
  5. Get developers up to speed and make changes in .NET.
  6. Build the .NET solution.

Migrating Applications from VB 6.0 to VB .NET

Introduction

Microsoft Visual Basic has had many evolutions since its original release, Visual Basic 1.0. The release of Visual Basic .NET is the biggest evolution yet. The language has been redesigned to take advantage of the .NET Framework. By leveraging the features that the .NET Framework provides, Visual Basic supports language features such as code inheritance, visual forms inheritance, and multi-threading. The object model is more extensive than earlier versions, and Visual Basic .NET totally integrates with the .NET Framework. Therefore, interaction between components written in other .NET languages is very efficient.

Benefits Reaped:

  • These new features open new doors for the Visual Basic developer: With Web Forms and ADO .NET, you now can rapidly develop scalable Web sites; with inheritance, the language now truly supports object-oriented programming; Windows Forms natively supports accessibility and visual inheritance; and deploying your applications is now as simple as copying your executables and components from directory to directory.

  • Visual Basic .NET is now fully integrated with the other Microsoft Visual Studio .NET languages. Not only can you develop application components in different programming languages, your classes also can now inherit from classes written in other languages using cross-language inheritance. With the unified debugger, you can now debug multiple language applications, irrespective of whether they are running locally or on remote computers. Finally, whatever language you use, the Microsoft .NET Framework provides a rich set of APIs for Microsoft Windows® and the Internet.

  • There were two options to consider when designing Visual Basic .NET—retrofit the existing code base to run on top of the .NET Framework, or build from the ground up, taking full advantage of the platform. To deliver the features most requested by customers (for example, inheritance and threading), to provide full and uninhibited access to the platform, and to ensure that Visual Basic moves forward into the next generation of Web applications, the right decision was to build from the ground up on the new platform. For example, many of the new features found in Windows Forms could have been added to the existing code base as new controls or more properties. However, this would have been at the cost of all the other great features inherent to Windows Forms, such as security and visual inheritance.

  • One of Microsoft's major goals was to ensure Visual Basic code could fully interoperate with code written in other languages, such as Microsoft Visual C# or Microsoft Visual C++, and enable the Visual Basic developer to harness the power of the .NET Framework simply, without resorting to the programming workarounds traditionally required to make Windows APIs work. Visual Basic now has the same variable types, arrays, user-defined types, classes, and interfaces as Visual C++ and any other language that targets the Common Language Runtime; however, we had to remove some features, such as fixed-length strings and non-zero based arrays from the language.

  • Visual Basic is now a true object-oriented language; some unintuitive and inconsistent features such as GoSub/Return and DefInt have been removed from the language.

  • The result is a re-energized Visual Basic, which will continue to be the most productive tool for creating Windows-based applications, and is now positioned to be the best tool for creating the next-generation Web sites.

Different Types Of Operating Systems

A basic list of the different types of operating systems


GUI

Short for Graphical User Interface, a GUI Operating System contains graphics and icons and is commonly navigated by using a computer mouse.

Some examples of GUI Operating Systems

  • System 7.x
  • Windows 98
  • Windows CE

Multi-user

A multi-user Operating System allows for multiple users to use the same computer at the same time and/or different times.

Some examples of multi-user Operating Systems

  • Linux
  • Unix
  • Windows 2000

Multiprocessing

An Operating System capable of supporting and utilizing more than one computer processor.
Some examples of multiprocessing Operating Systems
  • Linux
  • Unix
  • Windows 2000

Multitasking

An Operating system that is capable of allowing multiple software processes to run at the same time.
Some examples of multitasking Operating Systems.
  • Unix
  • Windows 2000

Multithreading

Operating systems that allow different parts of a software program to run concurrently. Operating systems that would fall into this category are:
  • Linux
  • Unix
  • Windows 2000

Graphical User Interface

Most modern computer systems contain Graphical User Interfaces. In some computer systems the GUI is integrated into the kernel; for example, in the original implementations of Microsoft Windows and Mac OS, the graphical subsystem was actually part of the kernel. Other operating systems, some older ones and some modern ones, are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS, and many other operating systems were built this way. Today, Linux and Mac OS X are also built this way.


Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, Linux, Minix) systems. Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and efforts to standardize on COSE and CDE in the 1990s largely failed; they were eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to open-source toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).


Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.


Networking

Current operating systems generally support a variety of networking protocols. Most are capable of using the TCP/IP networking protocols. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections.


Many operating systems also support one or more vendor-specific legacy networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access.


Device driver

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. A driver is a specialized, hardware-dependent, and operating-system-specific program that enables another program (typically the operating system, an application package, or a program running under the operating system kernel) to interact transparently with a hardware device. It usually also provides the interrupt handling required for asynchronous, time-dependent interaction with the hardware.


The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these OS-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that, from the operating system's point of view, the device appears to operate as usual.


Security

Many operating systems include some level of security. Security is based on two ideas:


The operating system provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users, and the services offered by the programs running on the system;


The operating system is capable of distinguishing between some requesters of these resources who are authorized (allowed) to access the resource, and others who are not authorized (forbidden). While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. Requesters, in turn, divide into two categories:


Internal security: an already running program. On some systems, once a program is running it has no limitations, but commonly the program has an identity which it keeps and is used to check all of its requests for resources.


External security: a new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.


In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?").


Internal security
Internal security can be thought of as protecting the computer's resources from the programs concurrently running on the system. Most operating systems set programs running natively on the computer's processor, so the problem arises of how to stop these programs doing the same task and having the same privileges as the operating system (which is after all just a program too). Processors used for general purpose operating systems generally have a hardware concept of privilege. Generally less privileged programs are automatically blocked from using certain hardware instructions, such as those to read or write from external devices like disks. Instead, they have to ask the privileged program (operating system kernel) to read or write. The operating system therefore gets the chance to check the program's identity and allow or refuse the request.

External security
Typically an operating system offers (or hosts) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the operating system's network address. Services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security.

At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.


An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code-based system such as Java.


Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since otherwise a program could potentially bypass the operating system, including its auditing.


Disk and file system management

Generally, operating systems include support for file systems, which allow the user to segment a given area of storage (usually a disk, sometimes RAM) into individual files.


Modern file systems comprise a hierarchy of directories. While the idea is conceptually similar across all general-purpose file systems, some differences in implementation exist. Two noticeable examples of this are the character used to separate directories, and case sensitivity.
Unix demarcates its path components with a slash (/), a convention followed by operating systems that emulated it or at least its concept of hierarchical directories, such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but had already also adopted the CP/M convention of using slashes for additional options to commands, so instead used the backslash (\) as its component separator. Microsoft Windows continues with this convention; Japanese editions of Windows use ¥, and Korean editions use ₩.[1] Prior to Mac OS X, versions of Mac OS use a colon (:) for a path separator. RISC OS uses a period (.).

Unix and Unix-like operating systems allow for any character in file names other than the slash and NUL characters (including line feed (LF) and other control characters). Unix file names are case sensitive, which allows multiple files to be created with names that differ only in case. By contrast, Microsoft Windows file names are not case sensitive by default. Windows also has a larger set of punctuation characters that are not allowed in file names.
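A small hedged sketch of how these platform differences surface in .NET code follows; the exact set of separator and invalid characters reported depends on the operating system the code runs on.

using System;
using System.IO;

class FileNameRules
{
    static void Main()
    {
        // The platform's directory separator: '/' on Unix-like systems, '\' on Windows.
        Console.WriteLine("Separator: " + Path.DirectorySeparatorChar);

        // Characters the current platform forbids in file names. On Windows this
        // includes punctuation such as < > : " / \ | ? * ; on Unix essentially
        // only the slash and the NUL character are disallowed.
        foreach (char c in Path.GetInvalidFileNameChars())
        {
            if (!char.IsControl(c))
                Console.Write(c + " ");
        }
        Console.WriteLine();

        // Case sensitivity: on Unix these would name two distinct files,
        // while Windows (by default) treats them as the same file.
        Console.WriteLine(string.Equals("Report.TXT", "report.txt",
            StringComparison.OrdinalIgnoreCase));
    }
}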


File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.


Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS, along with the FAT file systems, and NTFS.


Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. NTFS is the most efficient and reliable of the four, and as of Windows Vista it is the only file system on which the operating system can be installed. Windows Embedded CE 6.0 introduced exFAT, a file system suited to flash drives.


Mac OS X supports HFS+ with journaling as its primary file system. It is derived from the Hierarchical File System of the earlier Mac OS. Mac OS X has facilities to read and write FAT16, FAT32, NTFS, UDF, and other file systems, but cannot be installed to them.


Common to all these (and other) operating systems is support for file systems typically found on removable media. FAT12 is the file system most commonly found on floppy discs. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.

Process Management

A program running on a computer, whether visible to the user or not, is commonly referred to as a process. Process management refers to the facilities provided by the OS to support the creation, execution, and destruction of processes.

Creating a process involves allocating memory space for the process, loading the program's executable code into memory, telling the scheduler to run the program, and other tasks specific to the operating system.


The scheduler is the portion of the operating system that causes the program to be executed on the CPU, that is, 'scheduled' for execution. If the scheduler supports preemptive multitasking, it can change the program currently executing on the CPU to that of another program when it determines that the first program has executed for a predetermined amount of time. The amount of time allocated to a given process may depend on the needs of the process in question and the user's priority level for that process.


Destroying a process involves releasing any resources (including dynamically allocated memory, file references, and I/O ports) held by the program and ensuring that a different program is scheduled for execution.
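
To make this lifecycle concrete, here is a minimal C# sketch (assuming a Windows machine where notepad.exe is on the path); it only illustrates the create/schedule/destroy sequence described above, not how the operating system implements it internally:

using System;
using System.Diagnostics;

class ProcessLifecycleDemo
{
    static void Main()
    {
        // Creation: the OS allocates memory, loads the executable,
        // and hands the new process to the scheduler.
        using (Process p = Process.Start("notepad.exe"))
        {
            Console.WriteLine("Started process {0}", p.Id);

            // The scheduler decides when the process actually gets CPU time;
            // here we simply wait up to five seconds for it to exit.
            if (!p.WaitForExit(5000))
            {
                // Destruction: terminating the process makes the OS release
                // its memory, file handles, and other resources.
                p.Kill();
                p.WaitForExit();
            }

            Console.WriteLine("Process exited with code {0}", p.ExitCode);
        }
    }
}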


Depending on the operating system, process management can be simpler or more complex than described above. Several examples illustrate this:


The operating systems originally deployed on mainframes, and, much later, the original microcomputer operating systems, only supported one program at a time, requiring only a very basic scheduler. Each program was in complete control of the machine while it was running.


Multitasking (timesharing) first came to mainframes in the 1960s and to microcomputers in the mid-1980s, although in both cases it was, for the most part, years before the capability was perfected and made widely available.


Classic Mac OS generally supported only cooperative multitasking: application programs running on classic Mac OS must yield CPU time to the scheduler by calling a special function for that purpose.


Classic AmigaOS did not properly track resources allocated by processes at runtime. If a process had to be terminated, the resources would be lost to programs run in the future, until the machine was restarted.


Memory Management On Operating Systems

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.


Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing processes as they need it. Virtual memory does more than just make the computer's memory go further: virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM through disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

Garbage collection is the automated allocation and deallocation of computer memory resources for a program. It is generally implemented at the programming language level, in contrast to manual memory management, the explicit allocation and deallocation of computer memory resources by the programmer.

The principal goals of the operating system's memory management are:
  • to provide memory space so that several processes can execute at the same time;
  • to provide a satisfactory level of performance for the system's users;
  • to protect each program's resources;
  • to share (if desired) memory space between processes;
  • to make the addressing of memory space as transparent as possible for the programmer.
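
Returning to garbage collection for a moment, a minimal C# sketch (purely illustrative; forcing a collection like this is not something production code normally does) shows the runtime reclaiming memory once the last reference to an object is dropped:

using System;

class GarbageCollectionDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        // Allocate a 10 MB buffer; the runtime's memory manager finds space for it.
        byte[] buffer = new byte[10 * 1024 * 1024];
        Console.WriteLine("Allocated {0} bytes", buffer.Length);

        // Drop the only reference; the buffer is now garbage.
        buffer = null;

        // Collection normally happens automatically; forcing it here just
        // makes the effect easy to observe.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Managed heap before: {0} bytes, after: {1} bytes", before, after);
    }
}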


Memory management systems on multi-tasking operating systems usually deal with the following issues.


Relocation


In systems with virtual memory, programs in memory must be able to reside in different parts of the memory at different times. This is because when a program is swapped back into memory after being swapped out for a while, it cannot always be placed in the same location. The memory-management part of the operating system must therefore be able to relocate programs in memory and handle memory references in the program's code so that they always point to the right location in memory.


Protection


Processes should not be able to reference memory belonging to another process without permission. This is called memory protection, and it prevents malicious or malfunctioning code in one program from interfering with the operation of other running programs.


Sharing


Even though memory for different processes is protected from one another, different processes should be able to share information and therefore access the same region of memory.


Logical Organization


Programs are often organized in modules. Some of these modules can be shared between different programs, some are read-only, and some contain data that can be modified. Memory management is responsible for handling this logical organization, which differs from the physical linear address space. One way to arrange this organization is segmentation.

Physical Organization


Memory is usually divided into fast primary storage and slow secondary storage. Memory management in the operating system handles moving information between these two levels of memory.


Maximum Message Size For Web Services (.NET 3.5)

New in .NET 3.5 is the ability to limit the size of incoming messages when using Web services. Apparently this is to help combat Denial of Service (DoS) attacks.

However, it is not clear how to change this setting; it's simple when you know how. In your App.Config or Web.Config you should have a bindings section for each of your web service references. Within this there are all sorts of useful settings, but by default the maximum message size is quite small, so to alter it you must change maxBufferSize and maxReceivedMessageSize. Don't go crazy, though; just increase it to what you actually need, which may be quite large if you are building all your internal applications through a web service layer.
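
For example, the relevant part of the config might look roughly like this (the binding name below is a placeholder for whatever Visual Studio generated for your service reference, and the sizes shown are only examples):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- "MyServiceBinding" stands in for the name in your generated config -->
      <binding name="MyServiceBinding"
               maxBufferSize="2097152"
               maxReceivedMessageSize="2097152" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>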

C# Coalesce

Although this has been around for a long time, and it is slightly off topic, I needed it this week and think it is worth mentioning. With objects you occasionally need to know whether they are null, and if they are, get something else or do something else. This used to be very convoluted in .NET 1.1:

if (a != null)
{
    return a;
}
else if (b != null)
{
    return b;
}
else if (c != null)
{
    return c;
}
else
{
    return new object();
}


Now you can simply use this (.NET 2.0 and above):

return a ?? b ?? c ?? new object();

Note that you cannot use this with non-nullable value types such as int or bool, since they can never be null; it does work with their nullable counterparts (int?, bool?), however, and it is still very useful.
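
For example, with nullable value types (just an illustration):

int? timeout = null;
int effectiveTimeout = timeout ?? 30;    // 30, because timeout is null

bool? verbose = false;
bool effectiveVerbose = verbose ?? true; // false, because verbose has a value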

Programmatically retrieving Site Usage in MOSS 2007

Site usage reports can be retrieved programmatically by using the GetUsageData method of the SPWeb class. This method returns a DataTable containing information about the usage of the site, based on the specified report type, interval, number of columns and the last day to display. The SPWeb class lives in the Microsoft.SharePoint namespace.
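
A minimal sketch of calling it from code running on the MOSS server (the site URL is a placeholder, and the report type and period shown are just one of the available combinations):

using System;
using System.Data;
using Microsoft.SharePoint;

class UsageReportDemo
{
    static void Main()
    {
        // "http://server/sites/team" is a placeholder URL
        using (SPSite site = new SPSite("http://server/sites/team"))
        using (SPWeb web = site.OpenWeb())
        {
            // Daily usage figures broken down by page URL
            DataTable usage = web.GetUsageData(SPUsageReportType.url, SPUsagePeriodType.day);

            if (usage != null)
            {
                foreach (DataRow row in usage.Rows)
                {
                    Console.WriteLine("{0}\t{1}", row[0], row[1]);
                }
            }
        }
    }
}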

Alternatively, site usage reports can also be viewed from the Site Settings menu. However, this option is only available once it has been enabled under Usage Analysis Logging in Central Administration. The log file is located by default in the LOGS folder of the SharePoint 12 hive, and there is an option in Central Administration to change the logging path.

Business Data Catalog Overview

The Business Data Catalog feature of Microsoft Office SharePoint Server 2007 provides an easy way to integrate business data from back-end server applications, such as SAP or Siebel, with your corporate portal to provide rich solutions for end users without writing any code. You register business data exposed in databases or through Web services in the Business Data Catalog by creating metadata that describes the database or Web service. The Business Data Catalog then uses this metadata to make the right calls into the data source to retrieve the relevant data.

After you register a data source in the Business Data Catalog, the business data entities are available for use by any of the following business data features:

  • Business Data Web Parts – Generic Web Parts that display any entity from the Business Data Catalog, without deploying new code. The Web Parts provide customization, Web Part connections, and the standard Microsoft Windows SharePoint Services look-and-feel capabilities (paging, filtering, and style).

  • Business Data in Lists – New field type that allows you to add any entity defined in the Business Data Catalog to a SharePoint list or document library.

  • Business Data Actions – Business Data Actions bridge the gap between Office SharePoint Server 2007 and a native application user interface by providing a link back to the back-end data source. You can use Business Data Actions to build applications with write-back scenarios, such as a Customer Profile view that allows a user to update profile information directly in a back-end server application, such as SAP or Siebel. Actions are implemented as links, so you can also use actions to perform simple actions such as send an e-mail message or open a customer’s home page.

  • Business Data Search – Offers full-text search of the data sources registered in the Business Data Catalog. You can create new search result types based on the specific data entities registered in the Business Data Catalog.

  • Business Data in User Profiles – You can augment Office SharePoint Server 2007 user profiles from any external data source registered in the Business Data Catalog.

Excel Services - Architecture

Excel Services is built on the SharePoint products and technologies platform. There are three core components of Excel Services:
  1. Excel Calculation Service
  2. Excel Web Access
  3. Excel Web Service

Here is what each of these components does.

  • Excel Web Access – This is a web-part in SharePoint that performs the “rendering” (development team speak for “creating the HTML”) of Excel Workbooks on a web page. This is perhaps the most visible component for the end user. For those of you familiar with SharePoint, you can use it like any other web part in SharePoint to create a wide range of web pages.

  • Excel Web Services – This component provides the programmatic access that I talked about yesterday. It is a web service hosted in SharePoint. You can use methods in this web service to develop applications that incorporate calculations done by Excel Services and to automate the update of Excel spreadsheets.

  • Excel Calculation Service – This is the component that loads the spreadsheets, calculates them, refreshes external data, and maintains session state for interactivity. This is the heart of Excel Services.

Additionally, there is also a proxy that is used internally to handle the communication between the components on the web front end and the application server in multiple-server configurations. It also handles the load balancing in case there are multiple application servers in your installation.

These three components are divided into two major groups – those that live on a front-end server (which we refer to as a “web front end”), and those that live on a back-end application server. In the simplest configuration, all of these components can run on the same machine (we call this a “single box” installation). In a typical production environment with a significant number of users, the components on the web front end and the application server would be on different machines. It is possible to scale these components (up or out) independently.
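
As a rough sketch of calling the Excel Web Services component from code, assuming you have added a web reference to http://<server>/_vti_bin/ExcelService.asmx and that the generated proxy class is called ExcelService (the namespace, workbook URL, sheet and cell names below are all placeholders):

using System;
using System.Net;
using MyServer.ExcelWebService;   // placeholder: namespace of the generated web reference

class ExcelServicesDemo
{
    static void Main()
    {
        // ExcelService is the proxy class generated from the web reference
        ExcelService es = new ExcelService();
        es.Credentials = CredentialCache.DefaultCredentials;

        Status[] status;

        // Open a published workbook; the returned session id identifies the
        // session held by Excel Calculation Services on the application server.
        string sessionId = es.OpenWorkbook(
            "http://server/docs/Sales.xlsx", "en-US", "en-US", out status);

        // Read a calculated cell value from that session.
        object value = es.GetCellA1(sessionId, "Sheet1", "B2", true, out status);
        Console.WriteLine("B2 = {0}", value);

        es.CloseWorkbook(sessionId);
    }
}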

Security

Excel Services leverages the security infrastructure provided by SharePoint. Excel Services uses SharePoint for authentication (who can log into the server) as well as authorization (who has access to which spreadsheet and with what type of access: read, write, view only, etc.). This provides a robust security environment for protecting your spreadsheets.

Performance and Scalability

Excel Services are optimized for scenarios in which multiple users access the same spreadsheets. We have done a lot of work to optimize for this scenario – for example, caching at multiple levels so that collective performance for a group of users is improved by caching spreadsheets as well as external data queried by the spreadsheets. All this is transparent to the end user except for the good response time. (Anticipating a question, we only share cached results between users that have the same rights.)

Excel Services can be scaled up by adding CPUs or memory to the server it runs on. It will take full advantage of multiple CPUs to handle multiple requests concurrently, and it also supports 64-bit CPUs. It is also possible to scale out the web front end and application server components independently, so you can adjust either based on server load and performance requirements. For example, if there is a bottleneck in rendering spreadsheets with Excel Web Access, you can add more web front ends, and if the bottleneck is in calculation, you can add more application servers to the farm. A lot will depend on the type and size of the workbooks and the external data connections in the workbooks you are planning to use with Excel Services. For large deployments, some planning will need to go into the number of users as well as the anticipated workbook mix for the installation. The architecture is designed to meet the needs of a spectrum of deployments, from departmental to enterprise. The multi-tiered approach also allows for better security and isolation of services, for example in extranet scenarios.

Technical Architecture

Technical architecture is a part of software architecture, which focuses on how to deal with certain aspects of the software engineering process. It allows us to design better systems by:
  • Meeting system requirements and objectives: Both functional and non-functional requirements can be prioritized as "must have", "should have" or "want", where "must have" identifies properties that the system must have in order to be acceptable. An architecture allows us to evaluate and make tradeoffs among requirements of differing priority. Though system qualities (also known as non-functional requirements) can be compromised later in the development process, many will not be met if not explicitly taken into account at the architectural level.

  • Enabling flexible partitioning of the system: A good architecture enables flexible distribution of the system by allowing the system and its constituent applications to be partitioned among processors in many different ways without having to redesign the distributable component parts. This requires careful attention to the distribution potential of components early in the architectural design process.

  • Reducing cost of maintenance and evolution: Architecture can help minimize the costs of maintaining and evolving a given system over its entire lifetime by anticipating the main kinds of changes that will occur in the system, ensuring that the system's overall design will facilitate such changes, and localizing as far as possible the effects of such changes on design documents, code, and other system work products. This can be achieved by the minimization and control of subsystem interdependencies.

  • Increasing reuse and integration with legacy and third party software: An architecture may be designed to enable and facilitate the (re)use of certain existing components, frameworks, class libraries, legacy or third-party applications, etc..

Service Oriented Architecture (SOA) - The Basics

SOA: the false, the ideal, the real
  • False: SOA equals web services.
    SOA equals distributed services.
  • Ideal: SOA cleanly partitions and consistently represents business services.
  • Real: SOA is a fundamental change in the way we do business.

Real SOA

  • Changed mindset: service-oriented context for business logic.
  • Changed automation logic: service-oriented applications.
  • Changed infrastructure: service-oriented technologies.
  • A top-down organization transformation requiring real commitment.

SOA Characteristics

  • Loosely coupled: minimizes dependencies between services.
  • Contractual: adhere to agreement on service descriptions.
  • Autonomous: control the business logic they encapsulate.
  • Abstract: hide the business logic from the service consumers.
  • Reusable: divide business logic into reusable services.
  • Composable: facilitate the assembly of composite services.
  • Stateless: minimize retained information specific to an activity.
  • Discoverable: self-described so that they can be found and assessed.

Potential Benefits

  • Based on open standards.
  • Supports vendor diversity.
  • Fosters intrinsic interoperability.
  • Promotes discovery.
  • Promotes federation.
  • Fosters inherent reusability.
  • Emphasizes extensibility.
  • Promotes organizational agility.
  • Supports incremental implementation.
  • Technical architecture that adheres to and supports the principles of service orientation.

Common Misperceptions

  • SOA is just Web services.
  • SOA is just a marketing term.
  • SOA is just distributed computing.
  • SOA is a magic global solution to general interoperability.

Common Pitfalls

  • Not basing SOA on standards.
  • Not creating a transition plan.
  • Not starting with a solid XML foundation architecture and skill set.
  • Not understanding SOA performance requirements.
  • Not understanding web services security.

Summing Up SOA

  • Not a magic trick.
  • Not a magic solution.
  • Not an easy thing to do correctly.
  • The wavelet of the present.
  • The wave of the future.
  • A useful architectural concept.
  • A potential business facilitator.

Resources

  • Douglas K. Barry, Web Services and Service-Oriented Architectures: the savvy manager’s guide.
  • Thomas Erl, Service-Oriented Architecture: concepts, technology and design.
  • Thomas Erl, Service-Oriented Architecture: a field guide to integrating XML and web services.

Disable Right Click On SharePoint Site

Follow these steps to prevent users from right-clicking in your SharePoint site.

  • Add a Content Editor Web Part.
  • Add the following piece of code in the source editor of the content editor web part.

<HTML>
<BODY OnContextMenu = "return false;">
No Right Click on this Page.
</BODY>
</HTML>

  • Save the content editor web part.

Users are now prevented from using the right-click menu on the page.

Reconnecting Content Databases in MOSS 2007

After the failover of a SharePoint Products and Technologies database, the content databases must be reconnected. The following databases may need to be reconnected:
  • Content database
  • Admin database
  • Configuration Database
  • Search Database
  • Shared Services Provider

The following stsadm commands are used to reconnect the content database after a failover: deletecontentdb removes the reference to the database on the old server, and addcontentdb adds the database on the new principal server as the content database.

stsadm -o deletecontentdb -url [Site URL] -databasename [Database Name] -databaseserver [Old Principal Server]

stsadm -o addcontentdb -url [Site URL] -databasename [Database Name] -databaseserver [New Principal Server]

Reconnecting content databases can also be done using Central Administration:

  1. Navigate to Central Administration.
  2. Navigate to Application management page
  3. Click on the Content Databases.
  4. Select the content database that has failed-over.
  5. In the Manage Content Databases page, choose the Remove content database option, and then click OK.
  6. Select the Add a content database option, and enter the required details.
  7. In the Database Server box, enter the new principal server, and then click OK.

Globalization In .NET

Globalization refers to the process of designing and developing an application or piece of software so that it runs across platforms and sites with minimal or no modification. A globalized application is easy to customise for location-specific conditions and can present information based on varied inputs and the locale of the operating system. Under normal circumstances, globalization involves two processes: localisation (customisation) of the application, and internationalization of the application code so that it meets the standards of the local culture and related conventions.

In the internationalization process the application code base stays the same, and the effort goes into translating, storing, and retrieving content and making the application friendly for the selected locale. Culture and language differ from place to place, and you must also take into account factors such as time zones, date formats, currencies, telephone numbers, and many other locale-specific conventions.

Internationalization lets you remove locale-specific content from the code base and the presentation layer, so that a single code base and a single presentation layer, with common content, can suit any culture. Keeping all the content in a common place makes it easy for the program code to access it and to populate the presentation layer and the application efficiently.

In addition, internationalization enables you to store content, and the input collected from users, in a user-friendly format and in a secure manner, without compromising any standards of the local culture. Internationalization is the step that comes before any attempt to localise the application for local needs.

Through the localisation part of globalization, you can make your application adapt to various location-specific conditions; it becomes easy to translate and re-format the application to suit a new location without changing any code. You can also use the process to fix reported bugs and fine-tune the application so it runs smoothly.

Globalization also makes use of the prevailing culture information for the place where the software is to be installed and maintained. The location details and the language used in that particular area constitute the culture information, and .NET provides namespaces for working with it: System.Globalization, System.Resources and System.Threading.

The System.Globalization namespace contains classes that hold information about a region or country, the local language, calendar types, date formats, numbers, currency and so on, all in a well-organised fashion; these classes are used when developing globalized (internationalized) applications.

Advanced globalization functionality, such as text element processing and surrogate support, is available through classes such as StringInfo and TextInfo.

The System.Resources namespace contains interfaces and classes that help developers and maintainers create, store, retrieve, and manage the culture- and location-specific resources used by an application.

The System.Threading namespace contains interfaces and classes that support multithreaded programming; its classes are also useful for accessing data and synchronizing thread activity.
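
A small C# sketch tying these namespaces together (the resource base name "MyApp.Strings" and the "WelcomeMessage" key are placeholders and assume matching .resx files have been added to the project):

using System;
using System.Globalization;
using System.Reflection;
using System.Resources;
using System.Threading;

class GlobalizationDemo
{
    static void Main()
    {
        // Pick a culture, here French (France), and apply it to the current thread.
        CultureInfo french = new CultureInfo("fr-FR");
        Thread.CurrentThread.CurrentCulture = french;    // dates, numbers, currency
        Thread.CurrentThread.CurrentUICulture = french;  // which resources get loaded

        // Culture-aware formatting via System.Globalization
        Console.WriteLine(DateTime.Now.ToString("D", french));
        Console.WriteLine(1234.56m.ToString("C", french));

        // Culture-specific strings via System.Resources
        ResourceManager rm = new ResourceManager("MyApp.Strings",
            Assembly.GetExecutingAssembly());
        Console.WriteLine(rm.GetString("WelcomeMessage"));
    }
}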

Pros & Cons: Custom Templates and Site Definitions

Customization of site definitions holds the following advantages over custom templates:

  • Data is stored directly on the Web servers, so performance is typically better.
  • A higher level of list customization is possible through direct editing of a SCHEMA.XML file.
  • Certain kinds of customization to sites or lists require use of site definitions, such as introducing new file types, defining view styles, or modifying the drop-down Edit menu.

Site definition disadvantages include the following:

  • Customization of site definition requires more effort than creating custom templates.
  • It is difficult to edit a site definition after it has been deployed.
  • Doing anything other than adding code can break existing sites.
  • Users cannot apply a SharePoint theme through a site definition.
  • Users cannot create two lists of the same type with different default content.
  • Customizing site definitions requires access to the file system of the front-end Web server.

Custom templates hold the following advantages over customization of site definitions:

  • Custom templates are easy to create.
  • Almost anything that can be done in the user interface can be preserved in the template.
  • Custom templates can be modified without affecting existing sites that have been created from the templates.
  • Custom templates are easy to deploy.

Custom template disadvantages include the following:

  • Custom templates are not created in a development environment.
  • They are less efficient in large-scale environments.
  • If the site definition on which the custom template is based does not exist on the front-end server or servers, the custom template will not work.

Difference Between MOSS 2007 and WSS 3.0

It always astounds me what Microsoft are willing to bundle with their software: Analysis Services or SSIS (SQL Server Integration Services) with SQL Server, for example! They haven't stopped: they bundle Windows SharePoint Services with Windows Server 2003, which is basically a fully functional SharePoint server, yet they still sell SharePoint Server 2007 as a separate product. So what are the differences? At first glance it doesn't appear to be much, but for tight integration into the enterprise it seems that MOSS 2007 (Microsoft Office SharePoint Server) is a must! I will highlight the most interesting things that MOSS 2007 has over and above WSS 3.0 (Windows SharePoint Services):
  • User Profiles support - Allows each user to store profile information

  • Site Manager - Manage navigation, security, and look and feel with drag-and-drop functionality

  • Enterprise Search Tools - numerous tools to search Sharepoint Sites and Portals across entire enterprises

  • Business Data Catalog - The Business Data Catalog (BDC) tightly integrates external data, providing access to external data residing within other business applications, and enabling the display of, and interaction with external data

  • Business data search - Search data residing in your business applications using the BDC

  • Business Data Web Parts - Used for viewing lists, entities, and related information retrieved through the Business Data Catalog

  • Business document workflow support - Automate document review, approval, signature collection, and issue tracking using workflow applications

  • Retention and auditing policies - Allows customized information management policies to control retention period, expiration, and auditing

  • Browser-based forms - Integration with InfoPath allows forms and surveys created in InfoPath to be published to SharePoint and filled out in the browser.

  • Integrated, flexible spreadsheet publishing - Allows information workers to easily choose what they want to share with others and determine how others can interact with published spreadsheets.

  • Share, manage, and control spreadsheets - Provides access to spreadsheet data and analysis through server-calculated, interactive spreadsheets from a Web browser. Can help to protect any sensitive or proprietary information embedded in documents, such as financial models, and audits their usage.

  • Web-based business intelligence using Excel Services - Allows spreadsheets to be broadly and easily shared. Fully interactive, data-bound spreadsheets including charts, tables, and PivotTable views can be created as part of a portal, dashboard, or business scorecard.

  • Data Connection Libraries - Document Libraries storing ODCs (Office Data Connections), Making one single location for all data connections.

  • Business Data actions - Easily create actions that open Web pages, display the user interfaces of other business applications, launch InfoPath forms, and perform other common tasks.

  • Integrated business intelligence dashboards - Rich, interactive BI dashboards that assemble and display business information from disparate sources by using built-in Web parts, Excel spreadsheets, Reporting Services, or a collection of business data connectivity Web Parts.

  • Report Center - Provides consistent management of reports, spreadsheets, and data connections.

  • Key performance indicators - A KPI Web Part can connect to Analysis Services, Excel spreadsheets, SharePoint lists, or manually entered data.

  • Notification service - Improved notifications allow workflow users to receive e-mails by default, with improved triggering and filtering

  • Single Sign-On (SSO) - Allows the user to log on to a variety of applications with a single user name and password, integrating back-office applications, and helps pre-populate fields through integration with the profile features of MOSS 2007.

  • Social Networking Part - Connect to Public My Site pages to help establish connections between colleagues with common interests

  • Personal Site Support - Allows Users to create Personal Web Sites

  • Content syndication - Use RSS feeds to syndicate content managed in a portal site.