Learn IT

Free learning on anything and everything in Information Technology.

Prototype - As a Design Guideline

"A picture can replace a thousand words; a prototype can save you lots of meetings and possibly avoid a project failure."

A prototype is a dynamic view of the system, while requirements documents and design documents provide a static view. The same distinction applies to blueprints: a blueprint is a static model, whereas a small-scale working model is a dynamic one. With a dynamic view you can see things in action, adding another dimension to the design phase: the time dimension.

The most important benefit of using prototypes is that they promote communication with the client, project managers, and other developers.

Prototypes help trace requirements in both directions. They ensure that you implement what is needed, no more and no less.

Building Blocks of Design

Three high-level elements are considered for design:
  • Patterns
    A pattern is a specification for addressing a common problem in solution design. Patterns are not algorithms; they are higher level and more broadly applicable. Adopting a widely accepted pattern as part of solution design can help us address not only the problem we recognize, but also related problems we may not recognize at first glance.

  • Frameworks
    Occasionally, you may find targeted reference implementations of patterns that may be useful, and they are often in the form of a framework. In classic object-oriented design, a framework is a set of abstract classes to be incorporated into and reused as part of a software application. The current thinking is that a framework also may be a set of abstract data constructs, rules, or processes. A framework is different from a pattern in that a framework is something real that can be incorporated into and used as a foundational element of your solution—it is commonly the implementation of a pattern or specification.
    A framework provides guidance beyond that of a pattern, and typically provides deployable elements that can be used as the foundation for your solution. The better understood the framework, the easier it will be for an organization to support it over time.

  • Components
    Components are encapsulated elements of a system (hardware, software, network, etc.) and are by definition not case-specific. Components can be wrapped into your solution seamlessly, or can be managed as separate entities, regardless of deployment environment. Incorporating well-understood components into your solution definition can save time in delivery and increase the quality of your solution, but components may have associated support costs that can be considerable.
    Most components can be satisfactorily configured to meet our needs without any custom work. Incorporating a component and customizing it beyond the standard configuration is risky—you should understand the cost associated with supporting it going forward, and the risk of losing vendor-provided support.

Enterprise Architecture

The definition of an architecture used in ANSI/IEEE Std 1471-2000 is: "the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution."

An enterprise architecture (EA) is a conceptual tool that assists organizations with the understanding of their own structure and the way they work. It provides a map of the enterprise and is a route planner for business and technology change.

Normally an enterprise architecture takes the form of a comprehensive set of cohesive models that describe the structure and the functions of an enterprise. Important uses of it are in systematic IT planning and architecting, and in enhanced decision making.

The individual models in an EA are arranged in a logical manner, and this provides an ever-increasing level of detail about the enterprise, including:

  • Its objectives and goals.
  • Its processes and organization.
  • Its systems and data.
  • The technology used.

Software Architecture

Software architecture can be defined in terms of building blocks: software components, frameworks, IDEs, SDKs, and commercial off-the-shelf (COTS) packages.

The primary role of a software architect is to choose the components, integrate those that can be incorporated, and lead the team in writing custom supporting code to link components where no obvious connectors exist, all with the goal of building something that benefits the business.

WSS 3.0 - Workflows

A workflow allows you to attach a business process to items in Windows SharePoint Services 3.0. This process can control almost any aspect of an item in Windows SharePoint Services 3.0, including the life cycle of that item. For example, you could create a simple workflow that routes a document to a series of users for approval.

Workflows can be as simple or complex as your business processes require. You can create workflows that the user initiates, or workflows that Windows SharePoint Services 3.0 initiates automatically based on some event, such as when an item is created or changed.

Windows SharePoint Services 3.0 workflows are made available to end-users at the list or document-library level. Workflows can be added to documents or list items. Workflow can also be added to content types. Multiple workflows may be available for a given item. Multiple workflows can run simultaneously on the same item, but only one instance of a specific workflow can run on a specific item at any given time. For example, you might have two workflows, SpecReview and LegalReview, available for a specific content type, Specification. Although both workflows can run simultaneously on a specific item of the Specification content type, you can't have two instances of the LegalReview workflow running on the same item at the same time.

WSS 3.0 & MOSS 2007 - WebParts

Web Parts in Windows SharePoint Services provide developers with a way to create user interface elements that support both customization and personalization. A site owner or a site member with the appropriate permissions can customize Web Part Pages using a browser or Microsoft Office SharePoint Designer 2007 by adding, reconfiguring, and removing Web Parts.

The term customization implies that changes are seen by all site members. Individual users can further personalize Web Part Pages by adding, reconfiguring, and removing Web Parts. The term personalization implies that these changes will be seen only by the user that made them. Developing custom Web Parts provides an easy and powerful way to extend Windows SharePoint Services sites.

Because the Windows SharePoint Services Web Part infrastructure is now built on top of the ASP.NET 2.0 Web Parts control set, you can reuse your knowledge of ASP.NET programming to create quick and robust custom Web Parts.
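
To illustrate, here is a minimal custom Web Part written against the ASP.NET 2.0 Web Parts control set. The class name, property, and greeting text are hypothetical, and the compiled assembly would still need to be deployed and registered as a SafeControl before Windows SharePoint Services will load it; treat this as a sketch rather than a complete deployment recipe.

using System;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// A minimal custom Web Part (hypothetical example).
public class GreetingWebPart : WebPart
{
    private string greeting = "Hello from a custom Web Part";

    // A custom property that can be displayed and modified in the tool pane.
    [Personalizable(PersonalizationScope.Shared)]
    [WebBrowsable(true)]
    [WebDisplayName("Greeting")]
    [WebDescription("The text rendered by this Web Part.")]
    public string Greeting
    {
        get { return greeting; }
        set { greeting = value; }
    }

    // Render the Web Part's content.
    protected override void RenderContents(HtmlTextWriter writer)
    {
        writer.WriteEncodedText(greeting);
    }
}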

Following are some ways in which you can use custom Web Parts:

  • Creating custom properties you can display and modify in the user interface.

  • Improving performance and scalability. A compiled custom Web Part runs faster than a script.

  • Implementing proprietary code without disclosing the source code.

  • Securing and controlling access to content within the Web Part. Built-in Web Parts allow any users with appropriate permissions to change content and alter Web Part functionality. With a custom Web Part, you can determine the content or properties to display to users, regardless of their permissions.

  • Making your Web Part connectable, allowing Web Parts to provide or access data from other connectable Web Parts.

  • Interacting with the object models that are exposed in Windows SharePoint Services. For example, you can create a custom Web Part to save documents to a Windows SharePoint Services document library.

  • Controlling the cache for the Web Part by using built-in cache tools. For example, you can use these tools to specify when to read, write, or invalidate the Web Part cache.

  • Benefiting from a rich development environment with debugging features that are provided by tools such as Microsoft Visual Studio 2005.

  • Creating a base class for other Web Parts to extend. For example, to create a collection of Web Parts with similar features and functionality, create a custom base class from which multiple Web Parts can inherit. This reduces the overall cost of developing and testing subsequent Web Parts.

  • Controlling the implementation of the Web Part. For example, you can write a custom server-side Web Part that connects to a back-end database, or you can create a Web Part that is compatible with a broader range of Web browsers.

WSS 3.0 - Content Types & Site Columns

Windows SharePoint Services 3.0 provides two new tools to help you organize and standardize your data: content types and site columns.

Content Types
Content types—a core concept used throughout the functionality and services offered in Windows SharePoint Services 3.0—are designed to help users organize their SharePoint content in a more meaningful way. A content type is a reusable collection of settings you want to apply to a certain category of content. Content types enable you to manage the metadata and behaviors of a document or item type in a centralized, reusable way.

For example, consider the following two types of documents: software specifications and legal contracts. It is reasonable that you might want to store documents of those two types in the same document library. However, the metadata you would want to gather and store about each of these document types would be very different. In addition, you would most likely want to assign very different workflows to the two types of documents.

Content types enable you to store multiple, different types of content in the same document library or list. In the preceding example, you could define two content types named Specification and Contract. Each content type could include different columns for gathering and storing item metadata, as well as have different workflows assigned to it. Yet items of both content types could be stored in the same document library.

You can further extend content type functionality by using content types to assign additional settings, such as workflows or even custom attributes, to your items.

Because you can define content types independently of any specific list or document library, you can make a given content type available for the lists on multiple SharePoint sites. This enables you to centrally define and manage the types of content you store in your site collection. For example, you could use your Specification content type to ensure that all software specifications track the same metadata, even if those specifications are stored across multiple sites.
Content types are independent of file formats. For document libraries, you can specify a document template; when the user requests a new document of this content type, Windows SharePoint Services creates a document based on the template. However, users can still upload a document based on a different template, or even of a completely different file type.

Site Columns
Site columns provide a central, reusable model for column definition. When you create a site column, each list that uses this column has the same definition, and you do not have to do the tedious work of reproducing the column in each list.

A site column is a reusable column definition, or template, that you can assign to multiple lists across multiple SharePoint sites. Site columns decrease rework and help you ensure consistency of metadata across sites and lists. For example, suppose you define a site column named Customer. Users can add that column to their lists and reference it in their content types. This ensures that the column has the same attributes, at least to start with, wherever it appears.
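
As a rough sketch of how this looks in the Windows SharePoint Services 3.0 object model (the site URL, column name, and content type name below are hypothetical, and the code assumes a console application referencing Microsoft.SharePoint.dll running on the server), you can create a site column and reference it from a content type like this:

using System;
using Microsoft.SharePoint;

class SiteColumnDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server/sites/demo"))  // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            // Create a reusable site column named Customer.
            string internalName = web.Fields.Add("Customer", SPFieldType.Text, false);
            SPField customerField = web.Fields.GetFieldByInternalName(internalName);

            // Create a Specification content type based on Document,
            // then reference the new site column from it.
            SPContentType spec = new SPContentType(
                web.AvailableContentTypes["Document"], web.ContentTypes, "Specification");
            spec = web.ContentTypes.Add(spec);
            spec.FieldLinks.Add(new SPFieldLink(customerField));
            spec.Update();
        }
    }
}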

Additionally, site columns provide you with the simplicity of a single maintenance point. For example, you can create a status site column, which might contain multiple choices of an enterprise's specific statuses, and implement the column in dozens of project master lists across the site collection. If you add a new status, you can modify the site column instead of having to modify each list that contains a status column.

Much like site content types, you define a site column at the site level, independent of any actual list or content type.

When you add a column to a list, Windows SharePoint Services copies the site column locally onto the list as a list column. You can then make changes to the list column; these changes apply to the column only as it behaves on that list.

In certain situations, you may want to modify the column for a specific list. For this reason, you still have the option of one-off customization of columns at the list level. For example, suppose all projects within your company's Information Technology department have an additional status of On Hold—Waiting for Hardware. You could add this status to the column within the IT department's master project list.

You can also create your own list columns, directly on a list. Either way, list columns apply only to the list to which you add them; they cannot be added to multiple lists.

You can reference a site or list column in a content type.

Features Of Common Language Runtime

The .NET Framework provides a run-time environment called the common language runtime, which runs the code and provides services that make the development process easier.


The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.



With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.



The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.
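
As a small illustration (the folder path is hypothetical), managed code can demand a specific permission before performing a sensitive operation; if any caller on the stack lacks that permission, the runtime throws a SecurityException instead of allowing the file access:

using System;
using System.Security;
using System.Security.Permissions;

class CodeAccessSecurityDemo
{
    static void Main()
    {
        try
        {
            // Demand read access to a folder before touching it; the runtime
            // walks the call stack and verifies every caller has this permission.
            FileIOPermission permission =
                new FileIOPermission(FileIOPermissionAccess.Read, @"C:\SensitiveData");
            permission.Demand();

            Console.WriteLine("Permission granted, reading files...");
        }
        catch (SecurityException)
        {
            // Partially trusted code (for example, code from the Internet zone)
            // ends up here instead of reading the data.
            Console.WriteLine("Access denied by code access security policy.");
        }
    }
}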



The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.



In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.
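
A tiny sketch of what that means in practice: the object below is never explicitly freed; once no references to it remain, the garbage collector reclaims the memory on its own.

using System;

class MemoryDemo
{
    static void Main()
    {
        // Allocate a managed object; the runtime controls its layout in memory.
        byte[] buffer = new byte[1024 * 1024];
        Console.WriteLine("Allocated {0} bytes", buffer.Length);

        // Drop the only reference. There is no delete/free call: the garbage
        // collector releases the memory once the object is unreachable, which
        // prevents both memory leaks and dangling (invalid) references.
        buffer = null;

        GC.Collect(); // not needed in normal code; shown only to force a collection here
    }
}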



The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.



While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.
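
For example, a managed application can still call into an unmanaged Win32 DLL through platform invoke; this short sketch calls the standard MessageBox function exported by user32.dll.

using System;
using System.Runtime.InteropServices;

class InteropDemo
{
    // Declare the unmanaged entry point; the runtime marshals the
    // managed string arguments to the native call.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from managed code", "Interop demo", 0);
    }
}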



The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.



Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:


  • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.

  • To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.

  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.

  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.

  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.


Components of .NET Framework



The .NET Framework has two main components:


The common language runtime:


The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.


The .NET Framework Class Library:


The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.



The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.



For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.



Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

Creating Database Objects Using Managed Code (Microsoft .NET 2.0)

One of the neat features of SQL Server 2005 is the integration with the .NET CLR. The integration of CLR with SQL Server extends the capability of SQL Server in several important ways. This integration enables developers to create database objects such as stored procedures, user defined functions, and triggers by using modern object-oriented languages such as VB.NET and C#.

In this post, I will demonstrate how to create the stored procedures using C#. Before looking at the code, let us understand the pros and cons of using managed language in the database tier to create server side objects.

T-SQL vs. Managed Code

Although T-SQL, the existing data access and manipulation language, is well suited for set-oriented data access operations, it also has limitations. It was designed more than a decade ago and it is a procedural language rather than an object-oriented language. The integration of the .NET CLR with SQL Server enables the development of stored procedures, user-defined functions, triggers, aggregates, and user-defined types using any of the .NET languages.

This is enabled by the fact that the SQL Server engine hosts the CLR in-process. All managed code that executes in the server runs within the confines of the CLR. The managed code accesses the database using ADO.NET in conjunction with the new SQL Server Data Provider. Both Visual Basic .NET and C# are modern programming languages offering full support for arrays, structured exception handling, and collections.

Developers can leverage CLR integration to write code that has more complex logic and is more suited for computation tasks using languages such as Visual Basic .NET and C#. Managed code is better suited than Transact-SQL for number crunching and complicated execution logic, and features extensive support for many complex tasks, including string handling and regular expressions. T-SQL is a better candidate in situations where the code will mostly perform data access with little or no procedural logic.

Creating CLR Based Stored Procedures

For the purposes of this example, create a new SQL Server Project using Visual C# as the language of choice in Visual Studio 2005. Since you are creating a database project, you need to associate a data source with the project. At the time of creating the project, Visual Studio will automatically prompt you to either select an existing database reference or add a new database reference. Choose pubs as the database. Once the project is created, select Add Stored Procedure from the Project menu. In the Add New Item dialog box, enter Authors.cs and click the Add button. After the class is created, modify the code in the class to look like the following.



using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class Authors
{
    [SqlProcedure]
    public static void GetAuthors()
    {
        // SqlPipe sends tabular results and messages back to the caller.
        SqlPipe sp = SqlContext.Pipe;
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = "Select DatePart(second, GetDate()) " +
                " As timestamp, * from authors";
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }

    [SqlProcedure]
    public static void GetTitlesByAuthor(string authorID)
    {
        // Parameterized query; the @authorID placeholder is bound below.
        string sql = "select T.title, T.price, T.type, T.pubdate " +
            "from authors A " +
            "inner join titleauthor TA on A.au_id = TA.au_id " +
            "inner join titles T on TA.title_id = T.title_id " +
            "where A.au_id = @authorID";
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlPipe sp = SqlContext.Pipe;
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = sql;
            SqlParameter paramauthorID = new SqlParameter("@authorID", SqlDbType.VarChar, 11);
            paramauthorID.Direction = ParameterDirection.Input;
            paramauthorID.Value = authorID;
            cmd.Parameters.Add(paramauthorID);
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }
}
Let us examine the above code. It starts by importing the required namespaces and then declares a class named Authors. There are two important classes in the Microsoft.SqlServer.Server namespace that are specific to the in-proc provider:
  • SqlContext: This class encapsulates the extensions required to execute in-process code in SQL Server 2005. In addition, it provides access to the transaction and database connection that are part of the environment in which the routine executes.
  • SqlPipe: This class enables routines to send tabular results and messages to the client. This class is conceptually similar to the Response class found in ASP.NET in that it can be used to send messages to the callers.

The Authors class contains two static methods named GetAuthors and GetTitlesByAuthor. As the name suggests, the GetAuthors method simply returns all the authors from the authors table in the pubs database and the GetTitlesByAuthor method returns all the titles for a specific author.

Inside the GetAuthors method, you start by getting a reference to the SqlPipe object by invoking the Pipe property of the SqlContext class.

SqlPipe sp = SqlContext.Pipe;

Then you open the connection to the database using the SqlConnection object. Note that the connection string passed to the constructor of the SqlConnection object is set to "context connection=true", meaning that the command runs on the connection (and within the transaction) of the session that invoked the stored procedure, rather than opening a new connection to the server.

using (SqlConnection conn = new SqlConnection("context connection=true"))

Next, you open the connection by calling the Open() method.

conn.Open();

Then you create an instance of the SqlCommand object and set its properties appropriately.

SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.Connection = conn;
cmd.CommandText = "Select DatePart(second, GetDate()) " +
    " As timestamp, * from authors";

Finally, you execute the SQL query by calling the ExecuteReader method of the SqlCommand object.

SqlDataReader rdr = cmd.ExecuteReader();

Then, using the SqlPipe object, you return the tabular results and messages to the client. This is accomplished using the Send method of the SqlPipe class.
sp.Send(rdr);
The Send method provides several overloads for transmitting data through the pipe to the calling application:

  • Send(SqlDataReader) - Sends the tabular results in the form of a SqlDataReader object.
  • Send(SqlDataRecord) - Sends a single row of results in the form of a SqlDataRecord object.
  • Send(String) - Sends a message in the form of a string value to the calling application.

Both the methods in the Authors class utilize one of the Send methods that allows you to send tabular results to the client application in the form of a SqlDataReader object. Since the GetTitlesByAuthor method implementation is very similar to the GetAuthors method, I will not be discussing that method in detail.

Now that the stored procedures are created, deploying them is simple and straightforward. Before deploying, you need to build the project. To build the project, select Build->Build from the menu. This compiles all the classes in the project; any compilation errors are displayed in the Error List pane. Once the project is built, you can deploy it onto the SQL Server by selecting Build->Deploy from the menu. This not only registers the assembly in SQL Server but also deploys the stored procedures to it. Once the stored procedures are deployed, they can be invoked from the data access layer, which is the topic of focus in the next section.

Before executing the stored procedures, ensure you run the following SQL script using SQL Server Management Studio to enable managed code execution on the SQL Server instance.

EXEC sp_configure 'clr enabled', 1;

RECONFIGURE WITH OVERRIDE;

GO
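
Once the assembly is deployed and CLR execution is enabled, the stored procedures behave like any other stored procedure. You can run EXEC GetAuthors in SQL Server Management Studio as a quick check, or call them from client code with ADO.NET, roughly like this (the connection string and the sample au_id value are placeholders you would adjust for your environment):

using System;
using System.Data;
using System.Data.SqlClient;

class Client
{
    static void Main()
    {
        // Placeholder connection string; point it at the server hosting pubs.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Initial Catalog=pubs;Integrated Security=SSPI"))
        {
            conn.Open();

            // Call the CLR stored procedure exactly like a T-SQL procedure.
            SqlCommand cmd = new SqlCommand("GetTitlesByAuthor", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@authorID", "172-32-1176");

            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Console.WriteLine(rdr["title"]);
                }
            }
        }
    }
}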

Designing N-Tier Client/Server Architecture

Introduction

Designing N-Tier client/server architecture is no less complex than developing a two-tier architecture; however, the N-Tier architecture produces a far more flexible and scalable client/server environment. In two-tier architecture, the client and the server are the only layers, and the client handles both the presentation logic and the middle-tier (business) logic. N-Tier architecture separates these concerns into a presentation layer, a business logic layer, a data access logic layer, and a database layer. The next section discusses each of these layers in detail.

Different Layers of an N-Tier Application

In a typical N-Tier environment, the client implements the presentation logic (thin client). The business logic and data access logic are implemented on an application server(s) and the data resides on database server(s). N-tier architecture is typically thus defined by the following layers:
  • Presentation Layer: This is a front-end component responsible for providing portable presentation logic. Because the client is freed of application-layer tasks, there is no need for powerful client technology. The presentation layer consists of standard ASP.NET Web Forms, ASP pages, documents, Windows Forms, and so on. This layer works with the results/output of the business logic layer and transforms the results into something usable and readable by the end user.

  • Business Logic Layer: Allows users to share and control business logic by isolating it from the other layers of the application. The business layer functions between the presentation layer and data access logic layers, sending the client's data requests to the database layer through the data access layer.

  • Data Access Logic Layer: Provides access to the database by executing a set of SQL statements or stored procedures. This is where you write generic methods to interface with your data. For example, you might write a method for creating and opening a SqlConnection object, or for creating a SqlCommand object that executes a stored procedure (see the sketch after this list). As the name suggests, the data access logic layer contains no business rules or data manipulation/transformation logic; it is merely a reusable interface to the database.

  • Database Layer: Made up of a RDBMS database component such as SQL Server that provides the mechanism to store and retrieve data.
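
The following is a minimal sketch of such a data access method; the class name, connection string handling, and stored procedure names are illustrative. It opens a SqlConnection, executes a stored procedure through a SqlCommand, and hands the results back to the business logic layer without applying any business rules.

using System.Data;
using System.Data.SqlClient;

// Illustrative data access component: a thin, reusable interface to the database.
public class DataAccessLayer
{
    private readonly string connectionString;

    public DataAccessLayer(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Executes a stored procedure and returns the results as a DataTable.
    public DataTable ExecuteStoredProcedure(string procedureName, params SqlParameter[] parameters)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(procedureName, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            if (parameters != null)
            {
                cmd.Parameters.AddRange(parameters);
            }

            DataTable table = new DataTable();
            using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(table); // the adapter opens and closes the connection as needed
            }
            return table;
        }
    }
}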

Steps to Implement ClickOnce Deployment in .NET 2.0

  • You create a Windows Forms application and use the Publish option to deploy the application onto any of the following locations: File System, Local Web Server, FTP Site, or a Remote Web Site.

  • Once the application is deployed onto the target location, the users of the application can browse to the publish.htm file and install the application onto their machines. Note that the publish.htm file is the entry point for installing the application; this is discussed later in this article.

  • Once the user has installed the application, a shortcut icon will be added to the desktop and the application will also be listed in the Control Panel/Add Remove Programs.

  • When the user launches the application again, the manifest contains all the information needed to decide whether the application should go back to the source location and check for updates to the original application. If, for instance, a newer version of the application is available, it is automatically downloaded and made available to the user. Note that the new version is downloaded in a transacted manner, meaning that either the entire update is downloaded or nothing is; this ensures that the integrity of the application is preserved.

ClickOnce Deployment In .NET Framework 2.0

In the past, developers commonly chose web applications over rich Windows UIs because of the challenges associated with deploying a Smart Client Windows Forms application. However, with the release of Visual Studio 2005, Microsoft released a new technology named ClickOnce that is designed to solve the deployment issues of Windows Forms applications. This new technology not only provides an easy application installation mechanism but also enables easy deployment of upgrades to existing applications.

Since the introduction of powerful server-side web technologies such as ASP, JSP, and ASP.NET, developers have shown more interest in building web applications than in Windows applications. The factors that attracted developers toward web applications can be summarized as follows:


  • A web application is ubiquitous, making it accessible in all the places where an internet connection is available.

  • The second and most important factor is the deployment. With web applications, there is no need to deploy any software on the client side. All the client application needs is just the browser. This makes it possible for the developers to easily deploy updates to the existing web application without impacting the client machines.

If you talk to developers, you will find that the main reason for preferring web applications over Windows applications is the second point in the list above. Even though this has been true of traditional applications, Microsoft is making every attempt to ensure that Windows applications can be deployed and updated as easily as web applications.

You can see proof of this in the initial release of the .NET Framework, when Microsoft introduced the deployment of Windows Forms applications over HTTP. Using this approach, you could simply use an HREF HTML element to point to a managed executable (.exe); when the user clicked the link, Internet Explorer would automatically download and run the executable on the client machine. Even though this approach sounded very promising, it also presented some interesting challenges.

One of the most important challenges was downloading updated code over HTTP: because the process was not transacted, the application could be left in an inconsistent state.

Moreover, there was no way to specify whether the application could work in offline mode in addition to the traditional online mode. On top of the operational-mode issue, this approach also did not provide the ability to create shortcuts that could be used to launch the application. Even though the approach had a lot of issues, it could still be used in controlled environments.

However, for complex Windows Forms applications with multiple assembly dependencies, you needed a transacted and easily updatable way of deployment. This is exactly what the ClickOnce technology introduced with .NET Framework 2.0 provides.

2D Graphics Techniques

2D graphics models may combine geometric models (also called vector graphics), digital images (also called raster graphics), text to be typeset (defined by content, font style and size, color, position, and orientation), mathematical functions and equations, and more. These components can be modified and manipulated by two-dimensional geometric transformations such as translation, rotation, and scaling.

In object-oriented graphics, the image is described indirectly by an object endowed with a self-rendering method—a procedure which assigns colors to the image pixels by an arbitrary algorithm. Complex models can be built by combining simpler objects, in the paradigms of object-oriented programming.


Direct painting

A convenient way to create a complex image is to start with a blank "canvas" raster map (an array of pixels, also known as a bitmap) filled with some uniform background color and then "draw", "paint" or "paste" simple patches of color onto it, in an appropriate order. In particular, the canvas may be the frame buffer for a computer display.
Some programs will set the pixel colors directly, but most will rely on some 2D graphics library and/or the machine's graphics card, which usually implement the following operations (a short sketch using the GDI+ classes in System.Drawing follows the list):

  • paste a given image at a specified offset onto the canvas;

  • write a string of characters with a specified font, at a given position and angle;

  • paint a simple geometric shape, such as a triangle defined by three corners or a circle with a given center and radius;

  • draw a line segment, arc, or simple curve with a virtual pen of given width.
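
A minimal sketch of those operations using the GDI+ classes in System.Drawing; the file name, colors, and coordinates are arbitrary.

using System.Drawing;
using System.Drawing.Imaging;

class CanvasDemo
{
    static void Main()
    {
        // Blank "canvas": a raster map filled with a uniform background color.
        using (Bitmap canvas = new Bitmap(320, 200))
        using (Graphics g = Graphics.FromImage(canvas))
        {
            g.Clear(Color.White);

            // Paint a simple geometric shape (a filled circle).
            g.FillEllipse(Brushes.SkyBlue, 40, 40, 80, 80);

            // Draw a line segment with a virtual pen of given width.
            using (Pen pen = new Pen(Color.Black, 3))
                g.DrawLine(pen, 10, 180, 310, 180);

            // Write a string of characters with a specified font at a given position.
            using (Font font = new Font("Arial", 12))
                g.DrawString("Hello, canvas", font, Brushes.Black, 140, 60);

            canvas.Save("canvas.png", ImageFormat.Png);
        }
    }
}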


Extended color models

Text, shapes and lines are rendered with a client-specified color. Many libraries and cards provide color gradients, which are handy for the generation of smoothly varying backgrounds, shadow effects, etc. (See also Gouraud shading.) The pixel colors can also be taken from a texture, e.g. a digital image (thus emulating rub-on screentones and the fabled "checker paint" which used to be available only in cartoons).

Painting a pixel with a given color usually replaces its previous color. However, many systems support painting with transparent and translucent colors, which only modify the previous pixel values. The two colors may also be combined in fancier ways, e.g. by computing their bitwise exclusive or. This technique is known as inverting color or color inversion, and is often used in graphical user interfaces for highlighting, rubber-band drawing, and other volatile painting—since re-painting the same shapes with the same color will restore the original pixel values.
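
A rough sketch of the XOR-style inversion described above, using per-pixel operations on a System.Drawing bitmap; applying the same operation to the same rectangle twice restores the original pixels.

using System.Drawing;

class InvertDemo
{
    // Invert the colors inside a rectangle by XOR-ing each channel with 0xFF.
    // Repainting the same area restores the original pixel values, which is
    // why this technique suits highlighting and rubber-band drawing.
    static void InvertRectangle(Bitmap bitmap, Rectangle area)
    {
        for (int y = area.Top; y < area.Bottom; y++)
        {
            for (int x = area.Left; x < area.Right; x++)
            {
                Color c = bitmap.GetPixel(x, y);
                bitmap.SetPixel(x, y, Color.FromArgb(c.R ^ 0xFF, c.G ^ 0xFF, c.B ^ 0xFF));
            }
        }
    }
}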

Layers

The models used in 2D computer graphics usually do not provide for three-dimensional shapes, or for three-dimensional optical phenomena such as lighting, shadows, reflection, and refraction. However, they usually can model multiple layers (conceptually of ink, paper, or film; opaque, translucent, or transparent) stacked in a specific order. The ordering is usually defined by a single number (the layer's depth, or distance from the viewer).

Layered models are sometimes called 2 1/2-D computer graphics. They make it possible to mimic traditional drafting and printing techniques based on film and paper, such as cutting and pasting; and allow the user to edit any layer without affecting the others. For these reasons, they are used in most graphics editors. Layered models also allow better anti-aliasing of complex drawings and provide a sound model for certain techniques such as mitered joints and the even-odd rule.

Layered models are also used to allow the user to suppress unwanted information when viewing or printing a document, e.g. roads and/or railways from a map, certain process layers from an integrated circuit diagram, or hand annotations from a business letter.

In a layer-based model, the target image is produced by "painting" or "pasting" each layer, in order of decreasing depth, on the virtual canvas. Conceptually, each layer is first rendered on its own, yielding a digital image with the desired resolution which is then painted over the canvas, pixel by pixel. Fully transparent parts of a layer need not be rendered, of course. The rendering and painting may be done in parallel, i.e. each layer pixel may be painted on the canvas as soon as it is produced by the rendering procedure.
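
A simple sketch of that back-to-front compositing, assuming each layer has already been rendered into its own bitmap with transparency and carries a depth value; the Layer type below is hypothetical.

using System.Collections.Generic;
using System.Drawing;

// Hypothetical layer: an already-rendered image plus its distance from the viewer.
class Layer
{
    public Bitmap Image;
    public float Depth;
}

class Compositor
{
    // Paint the layers onto the canvas in order of decreasing depth,
    // so nearer layers are drawn over farther ones.
    static void Compose(Graphics canvas, List<Layer> layers)
    {
        layers.Sort(delegate(Layer a, Layer b) { return b.Depth.CompareTo(a.Depth); });
        foreach (Layer layer in layers)
        {
            // Fully transparent pixels in a layer leave the canvas unchanged.
            canvas.DrawImage(layer.Image, 0, 0);
        }
    }
}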

Layers that consist of complex geometric objects (such as text or polylines) may be broken down into simpler elements (characters or line segments, respectively), which are then painted as separate layers, in some order. However, this solution may create undesirable aliasing artifacts wherever two elements overlap the same pixel.

2D Computer Graphics

2D computer graphics is the computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. The word may stand for the branch of computer science that comprises such techniques, or for the models themselves.


2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics (whose approach is more akin to photography than to typography).


In many domains, such as desktop publishing, engineering, and business, a description of a document based on 2D computer graphics techniques can be much smaller than the corresponding digital image, often by a factor of 1,000 or more. This representation is also more flexible, since it can be rendered at different resolutions to suit different output devices. For these reasons, documents and illustrations are often stored or transmitted as 2D graphic files.


2D computer graphics started in the 1950s, based on vector graphics devices. These were largely supplanted by raster-based devices in the following decades. The PostScript language and the X Window System protocol were landmark developments in the field.

Subfields Of Computer Graphics

Geometry

Geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on the exterior of the object, boundary representations are most common in computer graphics. Two dimensional surfaces are a good analogy for the objects most often used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. These representations are Lagrangian, meaning the spatial locations of the samples are independent. In recent years, however, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).

Subfields Of Geometry

  • Constructive solid geometry - Process by which complicated objects are modelled with implicit geometric objects and boolean operations.

  • Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.

  • Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.

  • Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.

  • Subdivision surfaces

  • Out-of-core mesh processing - another recent field which focuses on mesh datasets that do not fit in main memory.

Animation

Animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically most interest in this area has been focused on parametric and data-driven models, but in recent years physical simulation has experienced a renaissance due to the growing computational capacity of modern machines.

Subfields Of Animation

  • Performance capture
  • Character animation
  • Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)

Rendering

Rendering converts a model into an image either by simulating light transport to get physically-based photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are:
  • Transport (how much light gets from one place to another) and
  • Scattering (how surfaces interact with light).
Transport

Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.

Scattering

Models of scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering.
Shading can be broken down into two orthogonal issues, which are often studied independently:
  • Scattering : How light interacts with the surface at a given point.
  • Shading : How material properties vary across the surface.

The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)

Other subfields

  • Physically-based rendering - concerned with generating images according to the laws of geometric optics.
  • Real time rendering - focuses on rendering for interactive applications, typically using specialized hardware like GPUs.
  • Non-photorealistic rendering
  • Relighting - recent area concerned with quickly re-rendering scenes.

Computer Graphics

Computer graphics is a sub-field of computer science and is concerned with digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.


Definition

Computer graphics broadly studies the manipulation of visual and geometric information using computational techniques. Computer graphics as an academic discipline focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues.

Major subfields of computer graphics include:
  1. Geometry: studies ways to represent and process surfaces.
  2. Animation: studies ways to represent and manipulate motion.
  3. Rendering: studies algorithms to reproduce light transport.
  4. Imaging: studies image acquisition or image editing.

Types Of Operating System

Generally, there are four types of operating system:

Real-time Operating System:

A real-time operating system (RTOS) is an operating system that guarantees a certain capability within a specified time constraint. For example, an operating system might be designed to ensure that a certain object was available for a robot on an assembly line. In what is usually called a "hard" real-time operating system, if the calculation could not be performed for making the object available at the designated time, the operating system would terminate with a failure. In a "soft" real-time operating system, the assembly line would continue to function but the production output might be lower as objects failed to appear at their designated time, causing the robot to be temporarily unproductive.

In general, real-time operating systems are said to require:

  • Multitasking
  • Process threads that can be prioritized.
  • A sufficient number of interrupt levels.

Real-time operating systems are often required in small embedded operating systems that are packaged as part of microdevices. Some kernels can be considered to meet the requirements of a real-time operating system. However, since other components, such as device drivers, are also usually needed for a particular solution, a real-time operating system is usually larger than just the kernel.

Single-user, single-tasking operating system:

As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-tasking operating system.

Single-user, multi-tasking operating system:

This is the type of operating system most people use on their desktop and laptop computers today. Windows 98 and the Mac OS are both examples of operating systems that let a single user have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word processor while downloading a file from the Internet and printing the text of an e-mail message.

Multi-user operating systems:

A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS, and mainframe operating systems, such as MVS, are examples of multi-user operating systems. It's important to differentiate here between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell NetWare can each support hundreds or thousands of networked users, but the operating systems themselves aren't true multi-user operating systems. The system administrator is the only user for Windows 2000 or NetWare. The network support and all of the remote user logins that the network enables are, in the overall plan of the operating system, programs being run by the administrative user.

Functions of Operating System

In any computer, the operating system performs the following functions:

  • Controls the backing store and peripherals such as disk drives and printers.

  • Controls the loading and running of programs.

  • Organises the use of memory between programs.

  • Organises processing time between programs and users.

  • Organises priorities between programs and users.

  • Maintains security and access rights of users.

  • Deals with errors and user instructions.

On a personal computer the operating system will:

  • Deal with the transfer of programs in and out of memory.

  • Allow the user to save files to a backing store.

  • Control the transfer of data to peripherals such as printers.

  • Provide the interface between user and computer - for example, Windows XP and OS X.

In a larger computer, such as a mainframe, the operating system works on the same principles.

Technical Approach for Migrating VB 6.0 Application to VB .NET

If you upgrade a Visual Basic 6.0 project group or an n-tier application to Visual Basic .NET, you must upgrade one project or tier at a time.

If your three-tier application includes a client component, a business component, and a data access component, you should upgrade the application in the following order:
  1. Client component
  2. Business component
  3. Data access component

In an n-tier application, always upgrade the client tier first, and then work down the dependency tree to the other tiers. You should follow this order for two reasons:

  • This approach allows you to keep the application working. When you upgrade the client, you break and work with only one component of the application; all of the other components continue to work the same way that they did previously. With this approach, you isolate the work area. Alternatively, if you upgrade the data tier first, you immediately break both the data tier and the components that depend on it.

  • Visual Basic 6.0 locks type libraries (TypeLibs). This creates a problem if you need to rebuild the TypeLib or recompile the underlying dynamic-link library (DLL). If you upgrade the business tier first and then upgrade the client, you must continually stop and restart Visual Basic 6.0 every time you change the middle tier. Consider the following workflow:
  1. Upgrade the middle tier.
  2. Change the Visual Basic 6.0 client to access the upgraded middle tier.
  3. Run the application against the middle tier.

If you want to change the .NET DLL, you must then close Visual Basic 6.0, recompile in .NET, restart Visual Basic 6.0, and so on. You can avoid this problem if you upgrade the client first and then upgrade the middle tier.

To upgrade each Visual Basic 6.0 application, use the Upgrade tool that is included with Visual Basic .NET. The Upgrade tool is started when you use Visual Basic .NET to open a Visual Basic 6.0 project. When you use the Upgrade tool, the Visual Basic 6.0 project is not changed, and a new Visual Basic .NET project is created. Before you upgrade a Visual Basic 6.0 project, it is best to prepare it for upgrade.

Migration Strategy for Upgrading VB 6.0 Application to VB .NET

When we start developing any new software solution, certain steps are taken. We begin with a plan, identify processes, gather requirements, and eventually build the architecture of the solution. Once things start taking shape, we start with development. Why do we choose this path? We all know that the path for doing the analysis and design up front has been proven to save a lot of time and cost for software development. In order to migrate projects from any prior version of Visual Basic, the path for analysis and design up front yields the best results.

The analysis part is slightly different in this case. We begin by studying the current application and try to identify code blocks that require changes. When migrating your VB applications, it is not recommended that you directly convert the existing application to .NET and then fix the converted code there. It is always better to first bring the existing application to the "Migration Ready" stage.

Here are the steps for migrating applications from VB 6.0 to VB .NET:

  1. Evaluate the project and create a migration strategy.
  2. Make changes in the VB 6.0 project to bring it to the "Migration Ready" stage.
  3. Migrate using the Visual Basic .NET Migration tool.
  4. If the results are not up to par, make more changes and run the Migration tool again (repeat Steps 2 and 3 as necessary).
  5. Get developers up to speed and make changes in .NET.
  6. Build the .NET solution.

Migrating Applications from VB 6.0 to VB .NET

Introduction

Microsoft Visual Basic has had many evolutions since its original release, Visual Basic 1.0. The release of Visual Basic .NET is the biggest evolution yet. The language has been redesigned to take advantage of the .NET Framework. By leveraging the features that the .NET Framework provides, Visual Basic supports language features such as code inheritance, visual forms inheritance, and multi-threading. The object model is more extensive than earlier versions, and Visual Basic .NET totally integrates with the .NET Framework. Therefore, interaction between components written in other .NET languages is very efficient.

Benefits Reaped:

  • These new features open new doors for the Visual Basic developer: With Web Forms and ADO .NET, you now can rapidly develop scalable Web sites; with inheritance, the language now truly supports object-oriented programming; Windows Forms natively supports accessibility and visual inheritance; and deploying your applications is now as simple as copying your executables and components from directory to directory.

  • Visual Basic .NET is now fully integrated with the other Microsoft Visual Studio .NET languages. Not only can you develop application components in different programming languages, your classes also can now inherit from classes written in other languages using cross-language inheritance. With the unified debugger, you can now debug multiple language applications, irrespective of whether they are running locally or on remote computers. Finally, whatever language you use, the Microsoft .NET Framework provides a rich set of APIs for Microsoft Windows® and the Internet.

  • There were two options to consider when designing Visual Basic .NET—retrofit the existing code base to run on top of the .NET Framework, or build from the ground up, taking full advantage of the platform. To deliver the features most requested by customers (for example, inheritance and threading), to provide full and uninhibited access to the platform, and to ensure that Visual Basic moves forward into the next generation of Web applications, the right decision was to build from the ground up on the new platform. For example, many of the new features found in Windows Forms could have been added to the existing code base as new controls or more properties. However, this would have been at the cost of all the other great features inherent to Windows Forms, such as security and visual inheritance.

  • One of Microsoft's major goals was to ensure Visual Basic code could fully interoperate with code written in other languages, such as Microsoft Visual C# or Microsoft Visual C++, and enable the Visual Basic developer to harness the power of the .NET Framework simply, without resorting to the programming workarounds traditionally required to make Windows APIs work. Visual Basic now has the same variable types, arrays, user-defined types, classes, and interfaces as Visual C++ and any other language that targets the Common Language Runtime; however, we had to remove some features, such as fixed-length strings and non-zero based arrays from the language.

  • Visual Basic is now a true object-oriented language; some unintuitive and inconsistent features such as GoSub/Return and DefInt have been removed from the language.

  • The result is a re-energized Visual Basic, which will continue to be the most productive tool for creating Windows-based applications, and is now positioned to be the best tool for creating the next-generation Web sites.

Different Types Of Operating Systems

A basic list of the different types of operating systems


GUI

Short for Graphical User Interface, a GUI Operating System contains graphics and icons and is commonly navigated by using a computer mouse.

Some examples of GUI Operating Systems

  • System 7.x
  • Windows 98
  • Windows CE

Multi-user

A multi-user Operating System allows for multiple users to use the same computer at the same time and/or different times.

Some examples of multi-user Operating Systems

  • Linux
  • Unix
  • Windows 2000

Multiprocessing

An Operating System capable of supporting and utilizing more than one computer processor.
Some examples of multiprocessing Operating Systems
  • Linux
  • Unix
  • Windows 2000

Multitasking

An Operating system that is capable of allowing multiple software processes to run at the same time.
Some examples of multitasking Operating Systems.
  • Unix
  • Windows 2000

Multithreading

Operating systems that allow different parts of a software program to run concurrently. Operating systems that would fall into this category are:
  • Linux
  • Unix
  • Windows 2000
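
To make the idea concrete, here is a minimal C# sketch (class and message text are illustrative only) in which two parts of one program run concurrently on separate threads scheduled by the operating system:

using System;
using System.Threading;

class MultithreadingDemo
{
    static void Main()
    {
        // One part of the program runs on a second thread...
        Thread worker = new Thread(delegate()
        {
            for (int i = 0; i < 3; i++)
                Console.WriteLine("Worker thread: step " + i);
        });
        worker.Start();

        // ...while another part keeps running on the main thread.
        for (int i = 0; i < 3; i++)
            Console.WriteLine("Main thread: step " + i);

        worker.Join(); // wait for the worker to finish
    }
}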

Graphical User Interface

Most modern computer systems contain Graphical User Interfaces. In some computer systems the GUI is integrated into the kernel; for example, in the original implementations of Microsoft Windows and Mac OS, the graphical subsystem was actually part of the kernel. Other operating systems, both older and modern ones, are modular, separating the graphics subsystem from the kernel. In the 1980s, UNIX, VMS, and many other operating systems were built this way; today, Linux and Mac OS X are built this way as well.


Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, Linux, Minix) systems. Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation; an effort in the 1990s to standardize on COSE and CDE largely failed, and those environments were eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to open source-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).


Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.

Networking

Current operating systems generally support a variety of networking protocols. Most are capable of using the TCP/IP networking protocols. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections.


Many operating systems also support one or more vendor-specific legacy networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access.

Device driver

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically it constitutes an interface for communicating with the device through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, providing the requisite interfaces to the operating system and software applications. A driver is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically the operating system kernel, or an application running under it) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.


The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide better reliability or performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, OSes essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these OS-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that, from the operating system's point of view, the device appears to operate as usual.
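
This translation idea can be illustrated with a small, purely hypothetical C# sketch: the "operating system" defines one interface for a class of device, and each driver maps those calls onto device-specific commands (all type and method names below are invented for illustration):

using System;

// Hypothetical contract the "operating system" dictates for one class of device.
interface IBlockDevice
{
    byte[] ReadBlock(long blockNumber);
    void WriteBlock(long blockNumber, byte[] data);
}

// A vendor-specific driver translates the generic calls into device-specific commands.
class AcmeDiskDriver : IBlockDevice
{
    public byte[] ReadBlock(long blockNumber)
    {
        Console.WriteLine("ACME read command, LBA " + blockNumber);
        return new byte[512];
    }

    public void WriteBlock(long blockNumber, byte[] data)
    {
        Console.WriteLine("ACME write command, LBA " + blockNumber + ", " + data.Length + " bytes");
    }
}

class DriverDemo
{
    static void Main()
    {
        // The rest of the system only ever sees IBlockDevice; a new device model
        // simply needs a new driver class that implements the same interface.
        IBlockDevice disk = new AcmeDiskDriver();
        disk.WriteBlock(0, new byte[512]);
        Console.WriteLine("Read back " + disk.ReadBlock(0).Length + " bytes");
    }
}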

Security

Many operating systems include some level of security. Security is based on the two ideas that:


The operating system provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users, and the services offered by the programs running on the system;


The operating system is capable of distinguishing between some requesters of these resources who are authorized (allowed) to access the resource, and others who are not authorized (forbidden). While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. Requesters, in turn, divide into two categories:


Internal security: requests from an already running program. On some systems, a program has no limitations once it is running, but commonly the program has an identity which it keeps and which is used to check all of its requests for resources.


External security: a new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity, there may be a process of authentication: often a username must be supplied, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.


In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?").


Internal security
Internal security can be thought of as protecting the computer's resources from the programs concurrently running on the system. Most operating systems run programs natively on the computer's processor, so the problem arises of how to stop these programs from performing the same operations, with the same privileges, as the operating system (which is, after all, just a program too). Processors used for general-purpose operating systems generally have a hardware concept of privilege. Less privileged programs are automatically blocked from using certain hardware instructions, such as those that read or write external devices like disks. Instead, they have to ask the privileged program (the operating system kernel) to read or write on their behalf. The operating system therefore gets the chance to check the program's identity and allow or refuse the request.
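
From an application's point of view, this check shows up as a refused request. Here is a minimal C# sketch (the file path is just an example of a location a normal user cannot write to) that asks the operating system for write access to a protected file and handles the refusal:

using System;
using System.IO;

class ProtectedResourceDemo
{
    static void Main()
    {
        // Example of a file an ordinary user is typically not allowed to modify.
        string protectedPath = @"C:\Windows\System32\drivers\etc\hosts";

        try
        {
            // The open request is a system call; the kernel checks the caller's
            // identity against the file's permissions before granting access.
            using (FileStream fs = File.Open(protectedPath, FileMode.Open, FileAccess.Write))
            {
                Console.WriteLine("Write access granted.");
            }
        }
        catch (UnauthorizedAccessException)
        {
            Console.WriteLine("The operating system refused the request: insufficient privileges.");
        }
    }
}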

External security
Typically an operating system offers (or hosts) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the operating system's network address. Services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security.

At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an otherwise insecure service, such as Telnet or FTP, without being exposed to attack from outside, because the firewall denies all traffic trying to connect to the service on that port.


An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.


Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since without it a program could potentially bypass the operating system, including its auditing.

Disk and file system management

Generally, operating systems include support for file systems, which allow the user to segment a given area of storage (occasionally RAM, but usually a disk) into individual files.


Modern file systems comprise a hierarchy of directories. While the idea is conceptually similar across all general-purpose file systems, some differences in implementation exist. Two noticeable examples of this are the character used to separate directories, and case sensitivity.
Unix demarcates its path components with a slash (/), a convention followed by operating systems that emulated it or at least its concept of hierarchical directories, such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but had already also adopted the CP/M convention of using slashes for additional options to commands, so instead used the backslash (\) as its component separator. Microsoft Windows continues with this convention; Japanese editions of Windows use ¥, and Korean editions use ₩.[1] Prior to Mac OS X, versions of Mac OS use a colon (:) for a path separator. RISC OS uses a period (.).

Unix and Unix-like operating systems allow for any character in file names other than the slash and NUL characters (including line feed (LF) and other control characters). Unix file names are case sensitive, which allows multiple files to be created with names that differ only in case. By contrast, Microsoft Windows file names are not case sensitive by default. Windows also has a larger set of punctuation characters that are not allowed in file names.
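
In .NET, these per-platform conventions are abstracted by the System.IO.Path class. A small sketch showing how to query the separator and the characters a file name may not contain on the current platform:

using System;
using System.IO;

class FileSystemConventionsDemo
{
    static void Main()
    {
        // '\' on Windows, '/' on Unix-like systems.
        Console.WriteLine("Directory separator: " + Path.DirectorySeparatorChar);

        // The characters this platform forbids in file names (a larger set on Windows).
        char[] invalid = Path.GetInvalidFileNameChars();
        Console.WriteLine("Number of forbidden file name characters: " + invalid.Length);

        // Path.Combine avoids hard-coding either separator.
        Console.WriteLine("Combined path: " + Path.Combine("reports", "summary.txt"));
    }
}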


File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.


Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS, along with the FAT file systems, and NTFS.


Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The NTFS file system is the most efficient and reliable of the four Windows file systems, and as of Windows Vista, is the only file system which the operating system can be installed on. Windows Embedded CE 6.0 introduced ExFAT, a file system suitable for flash drives.


Mac OS X supports HFS+ with journaling as its primary file system. It is derived from the Hierarchical File System of the earlier Mac OS. Mac OS X has facilities to read and write FAT16, FAT32, NTFS, UDF, and other file systems, but cannot be installed to them.


Common to all these (and other) operating systems is support for file systems typically found on removable media. FAT12 is the file system most commonly found on floppy discs. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.

Process Management

A program running on a computer, whether visible to the user or not, is commonly referred to as a process. Process management refers to the facilities provided by the OS to support the creation, execution, and destruction of processes.

Creating a process involves allocating memory space for the process, loading the program's executable code into memory, telling the scheduler to run the program, and other tasks specific to the operating system.


The scheduler is the portion of the operating system that causes the program to be executed on the CPU, that is, 'scheduled' for execution. If the scheduler supports preemptive multitasking, it can change the program currently executing on the CPU to that of another program when it determines that the first program has executed for a predetermined amount of time. The amount of time allocated to a given process may depend on the needs of the process in question and the user's priority level for that process.


Destroying a process involves releasing any resources (including dynamically allocated memory, file references, and I/O ports) held by the program and ensuring that a different program is scheduled for execution.
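
From user code, these facilities are exposed through OS APIs. A minimal C# sketch (the program started, notepad.exe, is just an example) that asks the operating system to create a process, waits for it, and lets the OS release its resources on exit:

using System;
using System.Diagnostics;

class ProcessDemo
{
    static void Main()
    {
        // Ask the OS to create a new process: allocate memory, load the
        // executable, and hand it to the scheduler.
        using (Process p = Process.Start("notepad.exe"))
        {
            Console.WriteLine("Started process with id " + p.Id);

            // Block until the process terminates; the OS then releases the
            // memory, file handles, and other resources it held.
            p.WaitForExit();
            Console.WriteLine("Process exited with code " + p.ExitCode);
        }
    }
}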


Depending on the operating system, process management can be simpler or more complex than described above. Several examples illustrate this:


The operating systems originally deployed on mainframes, and, much later, the original microcomputer operating systems, only supported one program at a time, requiring only a very basic scheduler. Each program was in complete control of the machine while it was running.


Multitasking (timesharing) first came to mainframes in the 1960s and to microcomputers in the mid-1980s, although in both cases it wasn't until years later that the capability was perfected and made widely available.


Classic Mac OS generally supported only cooperative multitasking. Application programs running under classic Mac OS had to yield CPU time to the scheduler by calling a special function for that purpose.


Classic AmigaOS did not properly track resources allocated by processes at runtime. If a process had to be terminated, its resources were lost to all programs run afterwards, until the machine was restarted.

Memory Management On Operating Systems

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.


Virtual memory makes the system appear to have more memory than it actually has by sharing physical memory between competing processes as they need it, but it does more than just make the computer's memory go further. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, providing isolation between processes and increasing the effectively available amount of RAM through disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

Garbage collection is the automatic reclamation of memory that a program no longer uses. It is generally implemented at the programming-language level and stands in contrast to manual memory management, the explicit allocation and deallocation of memory by the programmer. The principal goals of the operating system's memory management are:
  • to provide memory space so that several processes can execute at the same time
  • to provide a satisfactory level of performance for the system's users
  • to protect each program's resources
  • to share (if desired) memory space between processes
  • to make the addressing of memory space as transparent as possible for the programmer
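
As a quick illustration of garbage collection in a managed language, the following C# sketch allocates objects, drops every reference to them, and lets the runtime reclaim the memory (the buffer size and loop count are arbitrary):

using System;

class GarbageCollectionDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        // Allocate a batch of objects and drop every reference to them.
        for (int i = 0; i < 1000; i++)
        {
            byte[] buffer = new byte[10000];
        }

        // The unreachable buffers are reclaimed automatically; no explicit
        // free/delete call is ever made by the programmer.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Managed heap before: " + before + " bytes, after: " + after + " bytes");
    }
}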


Memory management systems on multi-tasking operating systems usually deal with the following issues.


Relocation


In systems with virtual memory, programs in memory must be able to reside in different parts of memory at different times. This is because when a program is swapped back into memory after being swapped out for a while, it cannot always be placed in the same location. Memory management in the operating system should therefore be able to relocate programs in memory and handle memory references in the program's code so that they always point to the right location in memory.


Protection


Processes should not be able to reference the memory of another process without permission. This is called memory protection, and it prevents malicious or malfunctioning code in one program from interfering with the operation of other running programs.


Sharing


Even though the memory of different processes is protected from other processes, different processes should be able to share information and therefore access the same part of memory when needed.


Logical Organization


Programs are often organized in modules. Some of these modules could be shared between different programs, some are read only and some contain data that can be modified. The memory management is responsible for handling this logical organization that is different from the physical linear address space. One way to arrange this organization is segmentation.

Physical Organization


Memory is usually divided into fast primary storage and slow secondary storage. Memory management in the operating system handles moving information between these two levels of memory.

Maximum Message Size For Web Services (.NET 3.5)

A new introduction in .NET 3.5 is the ability to limit the size of incoming messages when using Web services. Apparently this is to help combat Denial of Service (DoS) attacks.

However, it is not obvious how to change this setting, although it is simple when you know how. In your App.config or Web.config you should have a bindings section for each of your web service references. Within it there are all sorts of useful settings, but by default the maximum message size is quite small; to raise it you must change maxBufferSize and maxReceivedMessageSize. Don't go crazy, though; just increase it to what you actually need, which may be quite large if you are building all your internal applications through a web service layer.
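
If you prefer to set this in code rather than in the configuration file, the same limits are exposed on the binding object. A minimal sketch (the 5 MB figure is just an example value):

using System;
using System.ServiceModel;

class BindingLimitsDemo
{
    static void Main()
    {
        BasicHttpBinding binding = new BasicHttpBinding();

        // Both limits are quite small by default; raise them only as far as you need.
        binding.MaxReceivedMessageSize = 5 * 1024 * 1024; // 5 MB, an example value
        binding.MaxBufferSize = 5 * 1024 * 1024;          // keep in step when buffering

        Console.WriteLine("MaxReceivedMessageSize: " + binding.MaxReceivedMessageSize);
        Console.WriteLine("MaxBufferSize: " + binding.MaxBufferSize);
    }
}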

C# Coalesce

Although this has been around for a long time and this is slightly off topic, I needed it this week, and just think it is worth mentioning. With objects you occasionally need to know if they are null, and if they are get something else, or do something else. This used to be very convoluted with .NET 1.1:

if (a != null)
{
    return a;
}
else if (b != null)
{
    return b;
}
else if (c != null)
{
    return c;
}
else
{
    return new object();
}


Now you can simply use this (.NET 2.0 and above):

return a ?? b ?? c ?? new object();

Note that you cannot use this with non-nullable value types such as int or bool, since they can never be null; it does work with their nullable counterparts (int?, bool?). Still very useful.
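
A quick illustration with a nullable value type (the values are made up):

int? configuredTimeout = null;          // perhaps read from configuration
int timeout = configuredTimeout ?? 30;  // falls back to 30 when nothing is configured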

Programmatically retrieving Site Usage in MOSS 2007

Site usage reports can be retrieved programmatically by using the GetUsageData method of the SPWeb class. This method returns a DataTable containing information about the usage of the site, based on the specified type of report, interval, number of columns, and the last day to display. The GetUsageData method is found in the Microsoft.SharePoint namespace.
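
A minimal sketch of the call, assuming server-side code running in a SharePoint 2007 farm and a site at a hypothetical URL (the report type and period shown are just examples of the enumeration members; overloads with more parameters also exist):

using System;
using System.Data;
using Microsoft.SharePoint;

class SiteUsageDemo
{
    static void Main()
    {
        // Hypothetical site URL; this code must run on a server in the SharePoint farm.
        using (SPSite site = new SPSite("http://intranet/sites/sales"))
        using (SPWeb web = site.OpenWeb())
        {
            // Usage per URL, aggregated by day.
            DataTable usage = web.GetUsageData(SPUsageReportType.url, SPUsagePeriodType.day);

            if (usage != null) // null when usage analysis logging is not enabled
            {
                foreach (DataRow row in usage.Rows)
                {
                    Console.WriteLine(row[0] + " - " + row[1]);
                }
            }
        }
    }
}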

Alternatively, site usage reports can be viewed from the Site Settings menu. However, this option is only available once it has been enabled under Usage Analysis Logging in Central Administration. The log file is located by default in the LOGS folder of SharePoint's 12 hive, although Central Administration provides an option to change the logging path.

Business Data Catalog Overview

The Business Data Catalog feature of Microsoft Office SharePoint Server 2007 provides an easy way to integrate business data from back-end server applications, such as SAP or Siebel, with your corporate portal to provide rich solutions for end users without writing any code. You register business data exposed in databases or through Web services in the Business Data Catalog by creating metadata that describes the database or Web service. The Business Data Catalog then uses this metadata to make the right calls into the data source to retrieve the relevant data.

After you register a data source in the Business Data Catalog, the business data entities are available for use by any of the following business data features:

  • Business Data Web Parts – Generic Web Parts that display any entity from the Business Data Catalog, without deploying new code. The Web Parts provide customization, Web Part connections, and the standard Microsoft Windows SharePoint Services look-and-feel capabilities (paging, filtering, and style).

  • Business Data in Lists – New field type that allows you to add any entity defined in the Business Data Catalog to a SharePoint list or document library.

  • Business Data Actions – Business Data Actions bridge the gap between Office SharePoint Server 2007 and a native application user interface by providing a link back to the back-end data source. You can use Business Data Actions to build applications with write-back scenarios, such as a Customer Profile view that allows a user to update profile information directly in a back-end server application, such as SAP or Siebel. Actions are implemented as links, so you can also use actions to perform simple actions such as send an e-mail message or open a customer’s home page.

  • Business Data Search – Offers full-text search of the data sources registered in the Business Data Catalog. You can create new search result types based on the specific data entities registered in the Business Data Catalog.

  • Business Data in User Profiles – You can augment Office SharePoint Server 2007 user profiles from any external data source registered in the Business Data Catalog.

Excel Services - Architecture

Excel Services is built on the SharePoint products and technologies platform. There are three core components of Excel Services:
  1. Excel Calculation Service
  2. Excel Web Access
  3. Excel Web Service

Here is what each of these components does.

  • Excel Web Access – This is a web-part in SharePoint that performs the “rendering” (development team speak for “creating the HTML”) of Excel Workbooks on a web page. This is perhaps the most visible component for the end user. For those of you familiar with SharePoint, you can use it like any other web part in SharePoint to create a wide range of web pages.

  • Excel Web Services – This component provides the programmatic access that I talked about yesterday. It is a web service hosted in SharePoint. You can use methods in this web service to develop applications that incorporate calculations done by Excel Services and to automate the update of Excel spreadsheets (see the sketch after this list).

  • Excel Calculation Service – This is the component that loads the spreadsheets, calculates them, refreshes external data, and maintains session state for interactivity. This is the heart of Excel Services.
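
For instance, here is a minimal sketch of calling Excel Web Services from C#, assuming a web reference named ExcelWebService has been generated from the ExcelService.asmx endpoint (the workbook URL and cell reference are made up, and the exact proxy types depend on the generated reference):

using System;
using ExcelWebService; // hypothetical name of the generated web reference

class ExcelServicesDemo
{
    static void Main()
    {
        ExcelService service = new ExcelService();
        service.Credentials = System.Net.CredentialCache.DefaultCredentials;

        Status[] status;

        // Open the workbook on the server; a session id identifies the loaded state.
        string sessionId = service.OpenWorkbook(
            "http://intranet/docs/Forecast.xlsx", "en-US", "en-US", out status);

        // Read a calculated cell from the server-side workbook.
        object value = service.GetCellA1(sessionId, "Sheet1", "B2", true, out status);
        Console.WriteLine("Calculated value: " + value);

        service.CloseWorkbook(sessionId);
    }
}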

Additionally, there is also a proxy that is used internally to handle the communication between the components on the web front end and the application server in multiple-server configurations. It also handles the load balancing in case there are multiple application servers in your installation.

These three components are divided into two major groups: those that live on a front-end server (which we refer to as a “web front end”), and those that live on a back-end application server. In the simplest configuration, all of these components could be running on the same machine (we call this a “single box” installation). In a typical production environment with a significant number of users, the components on the web front end and the application server would be on different machines. It is possible to scale (up or out) these components independently.

Security

Excel Services leverages the security infrastructure provided by SharePoint. Excel Services uses SharePoint for authentication (who can log into the server) as well as authorization (who has access to which spreadsheet and the type of access: read, write, view only, etc.). This provides a robust security environment for protecting your spreadsheets.

Performance and Scalability

Excel Services are optimized for scenarios in which multiple users access the same spreadsheets. We have done a lot of work to optimize for this scenario – for example, caching at multiple levels so that collective performance for a group of users is improved by caching spreadsheets as well as external data queried by the spreadsheets. All this is transparent to the end user except for the good response time. (Anticipating a question, we only share cached results between users that have the same rights.)

Excel Services can be scaled up by adding CPUs or memory to the server it runs on. It will take full advantage of multiple CPUs to handle multiple requests concurrently, and it also supports 64-bit CPUs. It is possible to scale out the web front end and application server components independently, so you can adjust either based on server load and performance requirements. For example, if the bottleneck is in rendering spreadsheets with Excel Web Access, you can add more web front ends, and if the bottleneck is in calculations, you can add more application servers to the farm. A lot will depend on the type and size of the workbooks, and on the external data connections in the workbooks you are planning to use with Excel Services. For large deployments, some planning will need to go into the number of users as well as the anticipated workbook mix for the installation. The architecture is designed to meet the needs of a spectrum of deployments, from departmental to enterprise. The multi-tiered approach also allows for better security and isolation of services, for example in extranet scenarios.

Technical Architecture

Technical architecture is a part of software architecture, which focuses on how to deal with certain aspects of the software engineering process. It allows us to design better systems by:
  • Meeting system requirements and objectives: Both functional and non-functional requirements can be prioritized as "must have", "should have" or "want", where "must have" identifies properties that the system must have in order to be acceptable. An architecture allows us to evaluate and make tradeoffs among requirements of differing priority. Though system qualities (also known as non-functional requirements) can be compromised later in the development process, many will not be met if not explicitly taken into account at the architectural level.

  • Enabling flexible partitioning of the system: A good architecture enables flexible distribution of the system by allowing the system and its constituent applications to be partitioned among processors in many different ways without having to redesign the distributable component parts. This requires careful attention to the distribution potential of components early in the architectural design process.

  • Reducing cost of maintenance and evolution: Architecture can help minimize the costs of maintaining and evolving a given system over its entire lifetime by anticipating the main kinds of changes that will occur in the system, ensuring that the system's overall design will facilitate such changes, and localizing as far as possible the effects of such changes on design documents, code, and other system work products. This can be achieved by the minimization and control of subsystem interdependencies.

  • Increasing reuse and integration with legacy and third party software: An architecture may be designed to enable and facilitate the (re)use of certain existing components, frameworks, class libraries, legacy or third-party applications, etc.