
Wednesday, February 3, 2010

Lightweight Directory Access Protocol (LDAP)


In computer networking, the Lightweight Directory Access Protocol, or LDAP ("ell-dap"), is a networking protocol for querying and modifying directory services running over TCP/IP. An LDAP directory usually follows the X.500 model: it is a tree of entries, each of which consists of a set of named attributes with values. While some services use a more complicated "forest" model, the vast majority use a simple starting point for their database organization.
An LDAP directory often reflects various political, geographic, and/or organizational boundaries, depending on the model chosen. LDAP deployments today tend to use Domain Name System (DNS) names for structuring the topmost levels of the hierarchy. Further into the directory might appear entries representing people, organizational units, printers, documents, groups of people or anything else which represents a given tree entry, or multiple entries.
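For example, an entry for a person under a DNS-derived suffix might look like the following (the names here are purely illustrative):

dn: cn=John Smith,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
cn: John Smith
sn: Smith
mail: jsmith@example.com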

Protocol overview

A client starts an LDAP session by connecting to an LDAP server, by default on TCP port 389. The client then sends operation requests to the server, and the server sends responses in return. With some exceptions the client need not wait for a response before sending the next request, and the server may then send the responses in any order.
The basic operations are, in order:
  • Bind - authenticate, and specify LDAP protocol version,
  • Start TLS - protect the connection with Transport Layer Security (TLS), to have a more secure connection,
  • Search - search for and/or retrieve directory entries,
  • Compare - test if a named entry contains a given attribute value,
  • Add a new entry,
  • Delete an entry,
  • Modify an entry,
  • Modify DN - move or rename an entry,
  • Abandon - abort a previous request,
  • Extended Operation - generic operation used to define other operations,
  • Unbind - close the connection, not the inverse of Bind.
In addition the server may send "Unsolicited Notifications" that are not responses to any request, e.g. before it times out a connection.
A common alternate method of securing LDAP communication is using an SSL tunnel. This is denoted in LDAP URLs by using the URL scheme "ldaps". The standard port for LDAP over SSL is 636.
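To make the session concrete, here is a minimal sketch in C# using the System.DirectoryServices.Protocols API (available from .NET 2.0 onward, and requiring a reference to the System.DirectoryServices.Protocols assembly); the host name, credentials, and search base below are placeholder values.

using System.Net;
using System.DirectoryServices.Protocols;

class LdapDemo
{
    static void Main()
    {
        // Connect to the server (default TCP port 389).
        using (LdapConnection connection = new LdapConnection("ldap.example.com"))
        {
            connection.SessionOptions.ProtocolVersion = 3;

            // Bind: authenticate against the directory.
            connection.Bind(new NetworkCredential("cn=admin,dc=example,dc=com", "secret"));

            // Search: retrieve entries below a base DN matching a filter.
            SearchRequest request = new SearchRequest(
                "dc=example,dc=com",    // search base
                "(cn=John Smith)",      // filter
                SearchScope.Subtree,
                "cn", "mail");          // attributes to return
            SearchResponse response = (SearchResponse)connection.SendRequest(request);

            foreach (SearchResultEntry entry in response.Entries)
                System.Console.WriteLine(entry.DistinguishedName);
        } // Dispose closes the connection (Unbind).
    }
}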


Tuesday, February 2, 2010

Features in C# 3.0


A Preview of What is New in C# 3.0
 
On the heels of the Visual Studio 2005 and C# 2.0 releases, Microsoft has
given a sneak preview of what to expect in the version after the next: C#
3.0. Even though C# 3.0 is not even standardized yet, Microsoft provided
a preview release at its Professional Developers Conference (PDC) in
September so eager developers could try out some of the expected features.
This article discusses the following major new enhancements expected in C#
3.0:
 
  • Implicitly typed local variables
  • Anonymous types
  • Extension methods
  • Object and collection initializers
  • Lambda expressions
  • Query expressions
  • Expression Trees

Implicitly Typed Local Variables

C# 3.0 introduces a new keyword called "var". Var allows you to declare a
new variable, whose type is implicitly inferred from the expression used to
initialize the variable. In other words, the following is valid syntax in C# 3.0:
var i = 1;
The preceding line initializes the variable i to value 1 and gives it the type of
integer. Note that "i" is strongly typed to an integer—it is not an object or a VB6
variant, nor does it carry the overhead of an object or a variant.
 
To ensure the strongly typed nature of the variable that is declared with the var
keyword, C# 3.0 requires that you put the assignment (initializer) on the same
line as the declaration (declarator). Also, the initializer has to be an expression,
not an object or collection initializer, and it cannot be null. If multiple declarators
exist on the same variable, they must all evaluate to the same type at compile time.
 
Implicitly typed arrays, on the other hand, are possible using a slightly different
syntax, as shown below:
var intArr = new[] {1,2,3,4} ;
The above line of code would end up declaring intArr as int[].
The var keyword allows you to refer to instances of anonymous types (described
in the next section) and yet the instances are statically typed. So, when you create
instances of a class that contain an arbitrary set of data, you don't need to predefine
a class to both hold that structure and be able to hold that data in a statically typed
variable.
 

Anonymous Types

C# 3.0 gives you the flexibility to create an instance of a class without having to write
code for the class beforehand. So, you now can write code as shown below:
new {hair="black", skin="green", teethCount=64}
The preceding line of code, with the help of the "new" keyword, gives you a new type
that has three properties: hair, skin, and teethCount. Behind the scenes, the C#
compiler would create a class that looks as follows:
class __Anonymous1
{
private string _hair = "black";
private string _skin = "green";
private int _teethCount = 64;
public string hair {get { return _hair; } set { _hair = value; }}
public string skin {get { return _skin; } set { _skin = value; }}
public int teethCount {get { return _teethCount; } set { _teethCount = value; }}
}
In fact, if another anonymous type that specified the same sequence of names and
types were created, the compiler would be smart enough to create only a single
anonymous type for both instances to use. Also, because the instances are, as you
may have guessed, simply instances of the same class, they can be exchanged
because the types are really the same.
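A short sketch of that exchangeability (the property names are arbitrary):

var a = new { x = 1, y = 2 };
var b = new { x = 10, y = 20 };
a = b; // legal: both initializers produce the same compiler-generated type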
 
Now you have a class, but you still need something to hold an instance of the above
class. This is where the "var" keyword comes in handy; it lets you hold a statically
typed instance of the above instance of the anonymous type. Here is a rather simple
and easy use of an anonymous type:
 
var frankenstein = new {hair="black", skin="green", teethCount=64};

Extension Methods

Extension methods enable you to extend various types with additional static methods.
However, they are quite limited and should be used as a last resort—only where
instance methods are insufficient.
 
Extension methods can be declared only in static classes and are identified by the
keyword "this" as a modifier on the first parameter of the method. The following is
an example of a valid extension method:
public static int ToInt32(this string s)
{
return Convert.ToInt32(s) ;
}
If the static class that contains the above method is imported using the "using" keyword, the ToInt32 method will appear in existing types (albeit at lower precedence than existing instance methods), and you will be able to compile and execute code that looks as follows:
 
string s = "1";
int i = s.ToInt32();
This allows you to take advantage of the extensible nature of various built-in or defined
types and add newer methods to them.

Object and Collection Initializers

C# 3.0 is expected to allow you to include an initializer that specifies the initial values
of the members of a newly created object or collection. This enables you to combine
declaration and initialization in one step.
For instance, if you defined a CoOrdinate class as follows:
 
public class CoOrdinate
{
public int x ;
public int y;
}
You then could declare and initialize a CoOrdinate object using an object initializer,
like this:

var myCoOrd = new CoOrdinate{ x = 0, y= 0} ;
The above code may have made you raise your eyebrows and ask, "Why not just write
the following:"
var myCoOrd = new CoOrdinate(0, 0) ;
Note: I never declared a constructor that accepted two parameters in my class.
In fact, initializing the object using an object initializer essentially is equivalent to
calling a parameterless (default) constructor of the CoOrdinate object and then assigning the relevant values.
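In other words, the initializer above is roughly shorthand for:

CoOrdinate myCoOrd = new CoOrdinate(); // default constructor runs first
myCoOrd.x = 0;                         // then each member is assigned
myCoOrd.y = 0;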
Similarly, you should easily be able to give values to collections in a rather concise and
compact manner in C# 3.0. For instance, the following C# 2.0 code:
List<string> animals = new List<string>();
animals.Add("monkey");
animals.Add("donkey");
animals.Add("cow");
animals.Add("dog");
animals.Add("cat");
Now can be shortened to simply:
List<string> animals = new List<string> {
"monkey", "donkey", "cow", "dog", "cat" };



Lambda Expressions: The Espresso of Anonymous Methods

C# 1.x allowed you to write code blocks in methods, which you could invoke
easily using delegates. Delegates are definitely useful, and they are used
throughout the framework, but in many instances you had to declare a method
or a class just to use one. Thus, to give you an easier and more concise way
of writing code, C# 2.0 allowed you to replace standard calls to delegates
with anonymous methods. The following code may have been written in .NET 1.1
or earlier:
 
class Program
{
delegate void DemoDelegate();
static void Main(string[] args)
{
DemoDelegate myDelegate = new DemoDelegate(SayHi);
myDelegate();
}
static void SayHi()
{
Console.WriteLine("Hiya!!") ;
}
}
In C# 2.0, using anonymous methods, you could rewrite the code as follows:  
class Program
{
delegate void DemoDelegate();
static void Main(string[] args)
{
DemoDelegate myDelegate = delegate()
{
Console.WriteLine("Hiya!!");
};
myDelegate();
}
}
Whereas anonymous methods are a step above method-based delegate invocation, lambda expressions allow you to write anonymous methods in a more concise, functional syntax.
You can write a lambda expression as a parameter list, followed by the => token, followed by an expression or statement block. The above code can now be replaced with the following code:
class Program
{
delegate void DemoDelegate();
static void Main(string[] args)
{
DemoDelegate myDelegate = () => Console.WriteLine("Hiya!!") ;
myDelegate();
}
}
Although lambda expressions may appear to be simply a more concise way of writing anonymous methods, in reality they are also a functional superset of anonymous methods. Specifically, lambda expressions offer the following functionality (a short sketch follows the list):
  • They permit parameter types to be inferred. Anonymous methods require you to explicitly state each and every type.
  • They can hold either query expressions (described in the following section) or C# statements.
  • They can be treated as data using expression trees (described later). This cannot be done using anonymous methods.
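As a sketch of the first bullet (the Transform delegate below is declared just for this example):

delegate int Transform(int x);

class InferenceDemo
{
    static void Main()
    {
        // The type of x is inferred as int from the Transform signature;
        // the anonymous-method form must spell the type out.
        Transform square = x => x * x;
        Transform squareOld = delegate(int x) { return x * x; };

        System.Console.WriteLine(square(5));    // 25
        System.Console.WriteLine(squareOld(5)); // 25
    }
}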

Query Expressions

Even though further enhancements may be introduced in the coming months as C# 3.0 matures, the new features described in the preceding sections make it a lot easier to work with data inside C#. Query expressions, also known as LINQ (Language Integrated Query), allow you to write SQL-like syntax in C#. For instance, you may have a class that describes your data as follows:
public class CoOrdinate  
{ public int x ; public int y; }
You now could easily declare the logical equivalent of a database table inside C# as follows:
// Use Object and collection initializers
List<CoOrdinate> coords = ... ;
And now that you have your data as a collection that implements IEnumerable, you easily can query this data as follows:
var filteredCoords =
from c in coords
where c.x == 1
select new { c.x, c.y };
In the SQL-like syntax above, "from", "where", and "select" are query expressions that take advantage of C# 3.0 features such as anonymous types, extension methods, implicit typed local variables, and so forth.
This way, you can leverage SQL-like syntax and work with disconnected data easily.
Each query expression is actually translated into a C# method invocation behind the scenes. For instance, the following:
where c.x == 1
Translates to this:
coords.Where(c => c.x == 1)
As you can see, the above looks an awful lot like a lambda expression and extension method. C# 3.0 has many other query expressions and rules
that surround them.

 

Expression Trees

C# 3.0 includes a new type that allows expressions to be treated as data at runtime. This type, System.Expressions.Expression, is simply an in-memory representation of a lambda expression. The end result is that your code can modify and inspect lambda expressions at runtime.
The following is an example of an expression tree:
Expression<DemoDelegate> filter = () => Console.WriteLine("Hiya!!");
With the above expression tree setup, you easily can inspect the contents of the tree by using various properties on the filter variable.
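As a sketch of that inspection, assuming the API as it eventually shipped in .NET 3.5 (where the type lives in the System.Linq.Expressions namespace):

using System;
using System.Linq.Expressions;

class TreeDemo
{
    static void Main()
    {
        // A lambda whose body is the comparison c == 1.
        Expression<Func<int, bool>> filter = c => c == 1;

        // Inspect the tree at runtime through its node properties.
        BinaryExpression body = (BinaryExpression)filter.Body;
        Console.WriteLine(filter.Parameters[0].Name); // c
        Console.WriteLine(body.NodeType);             // Equal
        Console.WriteLine(body.Right);                // 1
    }
}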

One to Grow On

 

C# 3.0 offers incredible new features that make your work as an application developer and architect a lot easier, and yet it remains a programming language that lends itself to stricter and cleaner architecture.
C# 3.0 is in its infancy right now and will mature in the coming months, but given the sizable impact its changes will have on the surrounding .NET Framework and its recommended architecture and design patterns, definitely keep your eye on it.

New in Visual Studio 2008 (Support)


What is new in Visual Studio 2008
 
A quick list of some of the new features:
  • Multi-Targeting support
  • Web Designer and CSS support
  • ASP.NET AJAX and JavaScript support
  • Project Designer
  • Data
  • LINQ – Language Integrated Query
The features listed and explained here are not exhaustive; this document intends to give you a head start with VS 2008.

1. Multi-Targeting Support

Earlier, each Visual Studio release only supported a specific version of the .NET Framework. For example, VS 2003 only works with .NET 1.1, and VS 2005 only works with .NET 2.0.
One of the major changes with the VS 2008 release is to support what Microsoft calls "Multi-Targeting". This means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to take advantage of the new features that Visual Studio provides without having to migrate their existing projects and deployed applications to use a new version of the .NET Framework.
Now when we open an existing project or create a new one with VS 2008, we can pick which version of the .NET Framework to work with. The IDE will update its compilers and feature-set to match the chosen .NET Framework.
Features, controls, projects, item-templates, and references that do not work with the selected version of the Framework will be made unavailable or will be hidden.
Unfortunately, support has not been included for Framework versions 1.1 and earlier. The present release supports .NET Framework versions 2.0, 3.0, and 3.5.
Microsoft plans to continue multi-targeting support in all future releases of Visual Studio.

Creating a New Project with Visual Studio 2008 that Targets .NET 2.0 Framework Library

The screenshots below depict the creation of a new web application targeting .NET 2.0 Framework. Choose File->New Project. As we see in the snapshot below in the top-right of the new project dialog, there is now a dropdown that allows us to choose which versions of the .NET Framework we want to target when we create the new project. The templates available are filtered depending on the version of the Framework chosen from the dropdown:

Can I Upgrade an Existing Project to .NET 3.5?

When we open a solution created using an older version of Visual Studio and Framework, VS 2008 would ask if migration is required. If we opt to migrate, then a migration wizard would start. If we wish to upgrade our project to target a newer version of the Framework at a later point of time, we can pull up the project properties page and choose the Target Framework. The required assemblies are automatically referenced. The snapshot below shows the properties page with the option Target Framework marked.

2. Web Designer, Editing and CSS Support

One feature that web developers will discover with VS 2008 is its drastically improved HTML designer, and the extensive CSS support made available.
The snapshots below depict some of the new web designer features in-built into VS 2008.

Split View Editing

In addition to the existing views, Design view and Code view, VS 2008 brings along the Split view, which allows us to view both the HTML source and the Design view at the same time and easily make changes in either view. As shown in the image below, as we select a tag in code view, the corresponding elements/controls are selected in design view.

CSS Style Manager

VS 2008 introduces a new tool inside the IDE called "Manage Styles". This shows all of the CSS style sheets for the page.
It can be used when we are in any of the views - design, code and split views. Manage Styles tool can be activated by choosing Format -> CSS Styles -> Manage Styles from the menu. A snapshot of the same would look like the following:
Create a new style using the new style dialog window as shown in the snapshot below.
Now, the style manager would show the .labelcaption style as well in the CSS styles list. However, observe that the body element has a circle around it while .labelcaption does not; this is because the style is not in use yet.
We will now select all the labels below and apply our new style .labelcaption.
We can choose to modify the existing style through GUI using "Modify style..." menu option in the dropdown menu as shown above or choose to hand edit the code by choosing the option "Go To Code".

CSS Source View Intellisense

The designer is equipped with the ability to select an element or control in design-view, and graphically select a rule from the CSS list to apply to it.
We will also find when in source mode that we now have intellisense support for specifying CSS class rules. The CSS Intellisense is supported in both regular ASP.NET pages as well as when working with pages based on master pages.

Code Editing Enhancements

Below is a non-exhaustive list of a few new code editing improvements. There are many more about which I don't know yet.

Transparent Intellisense Mode

While using VS 2003/2005 we often find ourselves escaping out of intellisense in order to see the surrounding code better, and then going back to complete what we were doing.
VS 2008 provides a new feature which allows us to quickly make the intellisense drop-down list semi-transparent. Just hold down the "Ctrl" key while the intellisense drop-down is visible and we will be able to switch it into a transparent mode that enables us to look at the code beneath without having to escape out of Intellisense. The screenshot below depicts the same.

Organize C# Using Statements

One small but nice new feature in VS 2008 is support for better organizing of using statements in C#. We can now select a list of using statements, right-click, and then select the "Organize Usings" sub-menu. When we use this command, the IDE will analyze which types are used in the code file and automatically remove those namespaces that are declared but not required. A small and handy feature for code refactoring.

3. ASP.NET AJAX and JavaScript Support

JavaScript Intellisense

One new feature that developers will find in VS 2008 is its built-in support for JavaScript Intellisense. This makes using JavaScript and building AJAX applications significantly easier. A double click on an HTML control in design mode automatically wires up a click event for the button and creates the basic skeleton of the JavaScript function. As the image below depicts, JavaScript Intellisense is now built in. Other JavaScript Intellisense features include Intellisense for external JavaScript libraries and adding Intellisense hints to JavaScript functions.

JavaScript Debugging

One new JavaScript feature in VS 2008 is the much-improved support for JavaScript debugging. This makes debugging AJAX applications significantly easier. JavaScript debugging was made available in VS 2005 itself. However, we had to run the web application first to set the breakpoint or use the "debugger" JavaScript statement.
VS 2008 makes this much better by adding new support that allows us to set client-side JavaScript breakpoints directly within your server-side .aspx and .master source files.
We can now set both client-side JavaScript breakpoints and VB/C# server-side breakpoints at the same time on the same page and use a single debugger to step through both the server-side and client-side code in a single debug session. This feature is extremely useful for AJAX applications. The breakpoints are fully supported in external JavaScript libraries as well.

4. Few Other Features and Enhancements

Below is a list of few other enhancements and new features included in Microsoft Visual Studio 2008.

Project Designer

Windows Presentation Foundation (WPF) applications have been added to Visual Studio 2008. There are four WPF project types:
  • WinFX Windows Application
  • WinFX Web Browser Application
  • WinFX Custom Control Library
  • WinFX Service Library
When a WPF project is loaded in the IDE, the user interface of the Project Designer pages lets us specify properties specific to WPF applications.

Data

  • The Object Relational Designer (O/R Designer) assists developers in creating and editing the objects (LINQ to SQL entities) that map between an application and a remote database
  • Hierarchical update capabilities in Dataset Designer, providing generated code that includes the save logic required to maintain referential integrity between related tables
  • Local database caching incorporates an SQL Server Compact 3.5 database into an application and configures it to periodically synchronize the data with a remote database on a server. Local database caching enables applications to reduce the number of round trips between the application and a database server

LINQ – Language Integrated Query

LINQ is a new feature in VS 2008 that brings powerful querying capabilities into the language syntax. LINQ introduces patterns for querying and updating data. A set of new assemblies is provided that enables the use of LINQ with collections, SQL databases, and XML documents.
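A minimal, self-contained sketch of what this looks like over an in-memory collection:

using System;
using System.Collections.Generic;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        List<string> animals = new List<string> { "monkey", "donkey", "cow", "dog", "cat" };

        // Query expression: filter and sort the in-memory list.
        var shortNames = from a in animals
                         where a.Length == 3
                         orderby a
                         select a;

        foreach (string name in shortNames)
            Console.WriteLine(name); // cat, cow, dog
    }
}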

Visual Studio 2008 Debugger

The Visual Studio 2008 debugger has been enhanced with the following features:
  • Remote debugging support on Windows Vista
  • Improved support for debugging multithreaded applications
  • Debugging support for LINQ programming
  • Debugging support for Windows Communications Foundation
  • Support for script debugging, including client-side script files generated from server-side code, which now appear in Solution Explorer

Reporting

Visual Studio 2008 provides several new reporting features and improvements such as:
  • New Report Projects: Visual Studio 2008 includes two new project templates for creating reporting applications. When we create a new Reports Application project, Visual Studio provides a report (.rdlc) and a form with a ReportViewer control bound to the report.
  • Report Wizard: Visual Studio 2008 introduces a Report Wizard, which guides us through the steps to create a basic report. After we complete the wizard, we can enhance the report by using Report Designer.
  • Expression Editor Enhancement: The Expression Editor now provides expressions that we can use directly or customize as required.
  • PDF Compression: The ReportViewer controls can now compress reports that are rendered or exported to the PDF format.

Getting Started With Hibernate

Hibernate works best with the Plain Old Java Objects programming model for persistent classes.

Hibernate is not restricted in its usage of property types: all Java JDK types and primitives (like String, char and Date) can be mapped, including classes from the Java collections framework. You can map them as values, collections of values, or associations to other entities. The id is a special property that represents the database identifier (primary key) of the class. Hibernate can track identifiers internally, without the class declaring one, but we would lose some of the flexibility in our application architecture.

No special interface has to be implemented for persistent classes, nor do you have to subclass from a special root persistent class. Hibernate also doesn't require any build-time processing, such as byte-code manipulation; it relies solely on Java reflection and runtime class enhancement (through CGLIB). So, without any dependency of the POJO class on Hibernate, we can map it to a database table.

The following code sample shows a Java object that represents the AppLabsUser table. Generally these domain objects contain only getter and setter methods. One can use the Hibernate extension toolset to create such domain objects.

AppLabsUser.java

package org.applabs.quickstart;

import java.io.Serializable;
import java.util.Date;
import org.apache.commons.lang.builder.ToStringBuilder;

public class AppLabsUser implements Serializable {

/** identifier field */
private Long id;

/** persistent field */
private String userName;

/** persistent field */
private String userPassword;

/** persistent field */
private String userFirstName;

/** persistent field */
private String userLastName;

/** persistent field */
private String userEmail;

/** persistent field */
private Date userCreationDate;

/** persistent field */
private Date userModificationDate;

/** full constructor */
public AppLabsUser(String userName, String userPassword, String userFirstName, String userLastName, String userEmail, Date userCreationDate, Date userModificationDate) {
this.userName = userName;
this.userPassword = userPassword;
this.userFirstName = userFirstName;
this.userLastName = userLastName;
this.userEmail = userEmail;
this.userCreationDate = userCreationDate;
this.userModificationDate = userModificationDate;
}

/** default constructor */
public AppLabsUser() {
}

public Long getId() {
return this.id;
}

public void setId(Long id) {
this.id = id;
}

public String getUserName() {
return this.userName;
}

public void setUserName(String userName) {
this.userName = userName;
}

public String getUserPassword() {
return this.userPassword;
}

public void setUserPassword(String userPassword) {
this.userPassword = userPassword;
}

public String getUserFirstName() {
return this.userFirstName;
}

public void setUserFirstName(String userFirstName) {
this.userFirstName = userFirstName;
}

public String getUserLastName() {
return this.userLastName;
}

public void setUserLastName(String userLastName) {
this.userLastName = userLastName;
}

public String getUserEmail() {
return this.userEmail;
}

public void setUserEmail(String userEmail) {
this.userEmail = userEmail;
}

public Date getUserCreationDate() {
return this.userCreationDate;
}

public void setUserCreationDate(Date userCreationDate) {
this.userCreationDate = userCreationDate;
}

public Date getUserModificationDate() {
return this.userModificationDate;
}

public void setUserModificationDate(Date userModificationDate) {
this.userModificationDate = userModificationDate;
}
public String toString() {
return new ToStringBuilder(this)
.append("id", getId())
.toString();
}

}// End of class

HIBERNATE - Features of Hibernate

Transparent persistence without byte code processing
  • Transparent persistence
  • JavaBeans style properties are persisted
  • No build-time source or byte code generation / processing
  • Support for extensive subset of Java collections API
  • Collection instance management
  • Extensible type system
  • Constraint transparency
  • Automatic Dirty Checking
  • Detached object support
Object-oriented query language
  • Powerful object-oriented query language
  • Full support for polymorphic queries
  • New Criteria queries
  • Native SQL queries
Object / Relational mappings
  • Three different O/R mapping strategies
  • Multiple-objects to single-row mapping
  • Polymorphic associations
  • Bidirectional associations
  • Association filtering
  • Collections of basic types
  • Indexed collections
  • Composite Collection Elements
  • Lifecycle objects
Automatic primary key generation
  • Multiple synthetic key generation strategies
  • Support for application assigned identifiers
  • Support for composite keys
Object/Relational mapping definition
  • XML mapping documents
  • Human-readable format
  • XDoclet support
HDLCA (Hibernate Dual-Layer Cache Architecture)
  • Thread safeness
  • Non-blocking data access
  • Session level cache
  • Optional second-level cache
  • Optional query cache
  • Works well with others
High performance
  • Lazy initialization
  • Outer join fetching
  • Batch fetching
  • Support for optimistic locking with versioning/timestamping
  • Highly scalable architecture
  • No "special" database tables
  • SQL generated at system initialization time
  • (Optional) Internal connection pooling and PreparedStatement caching
J2EE integration
  • JMX support
  • Integration with J2EE architecture (optional)
  • New JCA support

Sunday, November 15, 2009

An Introduction to C++ Programming - Part 10

The data representation problem

In the file array as implemented last month, data was always stored in a raw binary format, exactly mirroring the bits as they lay in memory. This works fine for integers and such, but can be disastrous in other situations. Imagine a file array of strings (where string is a ``char*''). With the implementation from last month, the pointer value would be stored, not the data pointed to. When reading, a pointer value is read, and when dereferenced, whatever happens to be at the memory location pointed to (if anything) will be used (which is more than likely to result in a rather quick crash.) Anything with pointers is dangerous when stored in a raw binary format, yet we must somehow allow pointers in the array, and preferably so without causing problems for those using the array with built-in arithmetic types. How can this be done?
In part 4, when templates were introduced, a clever little construct called ``traits classes'' was shown. I then gave this rather terse description: ``A traits class is never instantiated, and doesn't contain any data. It just tells things about other classes, that is its sole purpose.'' Doesn't that smell like something we can use here? A traits class that tells how the data types should be represented on disk?
What do we need from such a traits class? Obviously, we need to know how much disk space each element will take, so a ``size'' member will definitely be necessary; otherwise we cannot know how much disk space will be required. We also need to know how to store the data, and how to read it. The easiest way is probably to have member functions ``writeTo'' and ``readFrom'' in the traits class. Thus we can have something looking like this:

template <class T> class FileArrayElementAccess
{
public:
static const size_t size;
static void writeTo(T value, ostream& os);
static T readFrom(istream& is);
};
The array is then rewritten to use this when dealing with the data. The change is extremely minor. ``storeElement'' needs to be rewritten as:

template <class T>
void FileArray<T>::storeElement(size_t index,
const T& element)
{
// what if index >= array_size?
typedef FileArrayElementAccess<T> traits;
(*pstream).seekp(traits::size*index
+sizeof(array_size), ios::beg);
// what if seek fails?
traits::writeTo(element,*pstream);
// what if write failed?
// what if too much data was written?
}
The change for ``readElement'' is of course analogous. However, as indicated by the last comment, a new error possibility has shown up. What if the ``writeTo'' and ``readFrom'' members of the traits class are buggy and write or read more data to disk than they're allowed to? Since it's the user of the array that must write the traits class (at least for their own data types) we cannot solve the problem, but we can give the user a chance to discover that something went wrong. Unfortunately for writing, the error is extremely severe; it means that the next entry in the array will have its data destroyed...

In the traits class, by the way, the constant ``size'', used for telling how many bytes in the stream each ``T'' will occupy, poses a problem with most C++ compilers today (modern ones mostly make life so much easier.) The problem is that a static variable, and also a static constant, in a class, needs to reside somewhere in memory, and the class declaration is not enough for that. This problem is two-fold. To begin with, where should it be stored? It's very much up to whoever writes the class, but somewhere in the code, there must be something like:

const size_t FileArrayElementAccess<X>::size = ...;
where ``X'' is the name of the class dealt with by the particular traits specialisation. The second problem is that this is totally unnecessary. What we want is a value that can be used by the compiler at compile time, not a memory location to read a value from. As I mentioned, a modern compiler does make this much easier. In standard C++ it is allowed to write:

template<> class FileArrayElementAccess<X>
{
public:
static const size_t size = ...;
...
};
Note that for some reason that I do not know, this construct is only legal if the member is a constant of an integral or enumeration type. ``size_t'' is such a type; it's some unsigned integral type, probably ``unsigned int'', but possibly ``unsigned long''. The expression denoted ``...'' must be possible to evaluate at compile time. Unless code is written that explicitly takes the address of ``size'', we need not give the constant any space to reside in. The odd construct ``template <>'' is also new C++ syntax, and means that what follows is a specialisation of a previously declared template. For old compilers, however, there's a work-around for integral values no larger than the largest ``int'' value. We cheat and use an enum instead of a ``size_t''. This makes the declaration:

class FileArrayElementAccess<X>
{
public:
enum { size= ... };
...
};
This is a bit ugly, but it is perfectly harmless. The advantage gained by adding the traits class is flexibility and safety. If someone wants to use a file array for their own class, they're free to do so. However, they must first write a ``FileArrayElementAccess'' specialisation. Failure to do so will result in a compilation error. This early error detection is beneficial. The sloppy solution from last month would not yield any error until run-time, which means a (usually long) debugging session.

Several arrays in a file

What is needed in order to host several arrays in the same file? One way or the other, there must be a mechanism for finding out where one array begins and another ends. I think the simplest solution is to let go of the file names, and instead make the constructors accept an ``fstream&''. We can then require that the put and get pointer of the stream must be where the array can begin, and we can in turn promise that the put and get pointer will be positioned at the byte after the array end. Of course, in addition to having a reference to the ``fstream'' in our class, we also need the ``home'' position, to seek relative to when indexing the array. As this becomes easy for us to write, it becomes easy to use as well. For someone requiring only one array in a file, there'll be slightly more code; an ``fstream'' object must be explicitly initialised somewhere and passed to the constructor of the array, instead of just giving it a name. I think the functionality increase/code expansion exchange is favorable.
In order to improve the likelihood of finding errors, we can waste a few bytes of disk space by writing a well known header and trailer pattern at the beginning and end of the array (before the first element, and after the last one.) If someone wants to allocate an array using an existing file, we can find out if the get pointer is in place for an array start.
The constructor creating a file should, however, first try to read from the file to see if it exists. If it does, it should be created from the file, just like the constructor accepting a stream only does. If the read fails, however, we can safely assume that the file doesn't exist and should instead be created.
The change in the class definition, and constructor implementation is relatively straight forward, if long:

template <class T>
class FileArray
{
public:
FileArray(fstream& fs, size_t elements);
// create a new file.

FileArray(fstream& fs);
// use an existing file and get size from there
...
private:
void initFromFile(const char*);

fstream& stream;
size_t array_size; // in elements
streampos home;
};

template <class T>
FileArray<T>::FileArray(fstream& fs, size_t elements)
: stream(fs),
array_size(elements)
{
// what if the file could not be opened?
// first try to read and see if there's a begin
// pattern. Either there is one, or we should
// get an eof.

char pattern[6];
stream.read(pattern,6);
if (stream.eof()) {
stream.clear(); // clear error state
// and initialise.

// begin of array pattern.
stream.write("ABegin",6);
// must store size of elements, as last month
const size_t elem_size
=FileArrayElementAccess<T>::size;
stream.write((const char*)&elem_size,
sizeof(elem_size));
// and of course the number of elements
stream.write((const char*)&array_size,
sizeof(array_size));
// Now that we've written the maintenance
// stuff, we know what the home position is.

home = stream.tellp();

// Then we must go to the end and write
// the end pattern.

stream.seekp(home+elem_size*array_size);
stream.write("AEnd",4);

// set put and get pointer to past the end pos.
stream.seekg(stream.tellp());
return;
}

initFromFile(pattern); // shared with other
// stream constructor
if (array_size != elements) {
// Uh oh. The data read from the stream,
// and the size given in the constructor
// mismatches! What now?
stream.clear(ios::failbit);
}

// set put and get pointer to past the end pos.
stream.seekp(stream.tellg());
}

template <class T>
FileArray<T>::FileArray(fstream& fs)
: stream(fs)
{
// First read the head pattern to see if
// it's right.
char pattern[6];
stream.read(pattern,6);
initFromFile(pattern);
// set put and get pointer to past the end pos.
stream.seekp(stream.tellg());
}

template <class T>
void FileArray<T>::initFromFile(const char* p)
{
// Check if the read pattern is correct
if (strncmp(p,"ABegin",6)) {
// What to do? It was all wrong!
stream.clear(ios::failbit);
// for lack of better,
// set the fail flag.
return;
}
// OK, we have a valid array, now let's see if
// it's of the right kind.
size_t elem_size;
stream.read((char*)&elem_size,sizeof(elem_size));
if (elem_size != FileArrayElementAccess<T>::size)
{
// wrong kind of array, the element sizes
// mismatch. Again, what to do? Let's set
// the fail flag for now.
stream.clear(ios::failbit);
// stupid name for the
// member function, right?
return;
}
// Get the size of the array. Can't do much with
// the size here, though.
stream.read((char*)&array_size,sizeof(array_size));
// Now we're past the header, so we know where the
// data begins and can set the home position.

home = stream.tellg();

stream.seekg(home+elem_size*array_size);

// Now positioned immediately after the last
// element.

char epattern[4];
stream.read(epattern,4);
if (strncmp(epattern,"AEnd",4)) {
// Whoops, corrupt file!
stream.clear(ios::failbit);
return;
}
// Seems like we have a valid array!
}
Other than the above, the only change needed for the array is that seeking will be done relative to ``home'' rather than the beginning of the file (plus the size of the header entries.) The new versions of ``storeElement'' and ``readElement'' become:

template <class T>
T FileArray<T>::readElement(size_t index) const
{ // what if index >= max_elements?
typedef FileArrayElementAccess<T> traits;
stream.seekg(home+index*traits::size);
// what if seek fails?

return traits::readFrom(stream);
// what if read fails?
// What if too much data is read?
}

template <class T>
void FileArray<T>::storeElement(size_t index,
const T& element)
{ // what if index >= array_size?
typedef FileArrayElementAccess<T> traits;
stream.seekp(home+traits::size*index);
// what if seek fails?
traits::writeTo(element,stream);
// what if write failed?
// what if too much data was written?
}

Temporary file array

Making use of a temporary file to store a file array that's not to be persistent between runs of the application isn't that tricky. The implementation so far makes use of a stream and known data about the beginning of the stream, the number of elements and the size of the elements. This can be used for the temporary file as well. The only thing we need to do is to create the temporary file first, open it with an fstream object, tie the stream reference to that object, and remember to delete the file in the destructor.
What's the best way of creating something and making sure we remember to undo it later? Well, of course, creating a new helper class which creates the file in its constructor and removes it in its destructor. Piece of cake. The only problem is that we shouldn't always create a temporary file, and when we do, we can handle it a bit differently from what we do with a ``global'' file that can be shared. For example, we know that we have exclusive rights to the file, and that it won't be reused, so there's no need for the extra information in the beginning and end. So, how's a temporary file created? The C++ standard doesn't say, and neither is there any support for it in the old de-facto standard. I don't think C does either. There are, however, two functions ``tmpnam'' and ``tempnam'' defined as commonly supported extensions to C. They can be found in <stdio.h>. I have in this implementation chosen to use ``tempnam'' as it's more flexible. ``tempnam'' works like this: it accepts two string parameters named ``dir'' and ``prefix''. It first attempts to create a temporary file in the directory pointed to by the environment variable ``TMPDIR''. If that fails, it attempts to create it in the directory indicated by the ``dir'' parameter, unless it's 0, in which case a hard-coded default is attempted. It returns a ``char*'' indicating a name to use. The memory area pointed to is allocated with the C function ``malloc'', and thus must be deallocated with ``free'' and not delete[].
Over to the implementation details:
We add a class called temporaryfile, which does the above mentioned work. We also add a member variable ``pfile'' which is of type ``ptr<temporaryfile>''. Remember the ``ptr'' template from last month? It's a smart pointer that deallocates whatever it points to in its destructor. It's important that the member variable ``pfile'' is listed before the ``stream'' member, since initialisation is done in the order listed, and the ``stream'' member must be initialised from the file object owned by ``pfile''. We also add a constructor with the number of elements as its sole parameter, which makes use of the temporary file.

class temporaryfile
{
public:
temporaryfile();
~temporaryfile();
iostream& stream();
private:
char* name;
fstream fs;
};

temporaryfile::temporaryfile()
: name(::tempnam(".","array")),
fs(name, ios::in|ios::out|ios::binary)
{
// what if tmpnam fails and name is 0
// what if fs is bad?
}

temporaryfile::~temporaryfile()
{
fs.close();
::remove(name);
// what if remove fails?
::free(name);
}
In the above code, ``tempnam'', ``remove'' and ``free'' are prefixed with ``::``, to make sure that it's the names in global scope that are meant, just in case someone enhances the class with a few more member functions whose name might clash. For the sake of syntactical convenience, I have added yet another operator to the ``ptr'' class template:

template <class T> class ptr
{
public:
ptr(T* tp=0) : p(tp) {};
~ptr() { delete p; };
T* operator->(void) const { return p; };
T& operator*(void) const { return *p;};
private:
ptr(const ptr&);
ptr& operator=(const ptr&);
T* p;
};
It's the ``operator->'' that's new, which allows us to write things like ``p->x'', where p is a ``ptr<X>'' and the type ``X'' contains some member named ``x''. The return type for ``operator->'' must be something that ``operator->'' can be applied to. The explanation sounds recursive, but it makes sense if you look at the above code. ``ptr<X>::operator->()'' returns an ``X*''. ``X*'' is something you can apply the built-in ``operator->'' to (which gives you access to the members.)

template <class T>
FileArray<T>::FileArray(size_t elements)
: pfile(new temporaryfile),
stream(pfile->stream()),
array_size(elements),
home(stream.tellg())
{
const size_t elem_size=
FileArrayElementAccess<T>::size;
// put a char just after the end to make
// sure there's enough free disk space.
stream.seekp(home+array_size*elem_size);
char c;
stream.write(&c,1);
// what to do if write fails?
// set put and get pointer to past the end pos
stream.seekg(stream.tellp());
}
That's it! The rest of the array works exactly as before. No need to rewrite anything else.

Code reuse

If you're an experienced C programmer, especially experienced with programming embedded systems where memory constraints are tough and you also have a good memory, you might get a feeling that something's wrong here.
What I'm talking about is something I mentioned the first time templates were introduced: ``Templates aren't source code. The source code is generated by the compiler when needed.'' This means that if a program uses FileArray<int>, FileArray<double>, FileArray<X> and FileArray<Y> (where ``X'' and ``Y'' are some classes,) there will be code for all four types. Now, have a close look at the member functions and see in what way ``FileArray<X>::FileArray(iostream& fs, size_t elements)'' differs from ``FileArray<Y>::FileArray(iostream& fs, size_t elements)''. Please do compare them.
What did you find? The only difference at all is in the handling of the member ``elem_size'', yet the same code is generated several times with that as the only difference. This is what is often referred to as the template code bloat of C++. We don't want code bloat. We want fast, tight, and slick applications.
Since the only thing that differs is the size of the elements, we can move the rest to something that isn't templatised, and use that common base everywhere. I've already shown how code reuse can be done by creating a separate class and having a member variable of that type. In this article I want to show an alternative way of reusing code, and that is through inheritance. Note very carefully that I did not say public inheritance. Public inheritance models ``is-A'' relationships only. We don't want an ``is-A'' relationship here. All we want is to reuse code to reduce code bloat. This is done through private inheritance. Private inheritance is used far less than it should be. Here's all there is to it. Create a class with the desired implementation to reuse and inherit privately from it. Nothing more, nothing less. To a user of your class, it matters not at all if you chose not to reuse code at all, reuse through encapsulation of a member variable, or reuse through private inheritance. It's not possible to refer to the descendant class through a pointer to the private base class; private inheritance is an implementation detail only, and not an interface issue.
To the point. What can, and what can not be isolated and put in a private base class? Let's first look at the data. The ``stream'' reference member can definitely be moved to the base, and so can the ``pfile'' member for temporary files. The ``array_size'' member can safely be there too, and also the ``home'' member for marking the beginning of the array on the stream. By doing that alone we have saved just about nothing at all, but if we add as a data member in the base class the size (on disk) of the elements, and initialise that member through the ``FileArrayElementAccess<T>::size'' traits member, all seeking in the file, including the initial seeking when creating the file array, can be moved to the base class. Now a lot has been gained. Very little will be left. Let's look at the new improved implementation:
Now for the declaration of the base class.

class FileArrayBase
{
public:
protected:
FileArrayBase(iostream& io,
size_t elements,
size_t elem_size);
FileArrayBase(iostream& io);
FileArrayBase(size_t elements, size_t elem_size);
iostream& seekp(size_t index) const;
iostream& seekg(size_t index) const;
size_t size() const; // number of elements
size_t element_size() const;
private:
class temporaryfile
{
public:
temporaryfile();
~temporaryfile();
iostream& stream();
private:
char* name;
fstream fs;
};
void initFromFile(const char* p);
ptr<temporaryfile> pfile;
iostream& stream;
size_t array_size;
size_t e_size;
streampos home;
};
The only surprise here should be the nesting of the class ``temporaryfile.'' Yes, it's possible to define a class within a class. Since the ``temporaryfile'' class is defined in the private section of ``FileArrayBase'', it's inaccessible from anywhere other than the ``FileArrayBase'' implementation. It's actually possible to nest classes in class templates as well, but few compilers today support that. When implementing the member functions of the nested class, it looks a bit ugly, since the surrounding scope must be used.

FileArrayBase::temporaryfile::temporaryfile()
: name(::tempnam(".","array")),
fs(name,ios::in|ios::out|ios::binary)
{
// what if tmpnam fails and name is 0
// what if fs is bad?
}

FileArrayBase::temporaryfile::~temporaryfile()
{
fs.close();
::remove(name);
// What if remove fails?
::free(name);
}

iostream& FileArrayBase::temporaryfile::stream()
{
return fs;
}
The implementation of ``FileArrayBase'' is very similar to the ``FileArray'' earlier. The only difference is that we use a parameter for the element size, instead of the traits class.

FileArrayBase::FileArrayBase(iostream& io,
size_t elements,
size_t elem_size)
: stream(io),
array_size(elements),
e_size(elem_size)
{
char pattern[sizeof(ArrayBegin)];
stream.read(pattern,sizeof(pattern));
if (stream.eof()) {
stream.clear(); // clear error state
// and initialize.
// begin of array pattern.
stream.write(ArrayBegin,sizeof(ArrayBegin));

// must store size of elements
stream.write((const char*)&elem_size,
sizeof(elem_size));

// and of course the number of elements
stream.write((const char*)&array_size,
sizeof(array_size));

// Now that we've written the maintenance
// stuff, we know what the home position is.
home = stream.tellp();

// Then we must go to the end and write
// the end pattern.

stream.seekp(home+elem_size*array_size);
stream.write(ArrayEnd,sizeof(ArrayEnd));

// set put and get pointer to past the end pos.
stream.seekg(stream.tellp());
return;
}
initFromFile(pattern); // shared with other
// stream constructor

if (array_size != elements) {
// Uh oh. The data read from the stream,
// and the size given in the constructor
// mismatches! What now?

stream.clear(ios::failbit);
}
if (e_size != elem_size) {
stream.clear(ios::failbit);
}
// set put and get pointer to past the end pos.
stream.seekp(stream.tellg());
}
To make life a little bit easier, I've assumed two arrays of char named ``ArrayBegin'' and ``ArrayEnd'', which hold the patterns to be used for marking the beginning and end of an array on disk.

FileArrayBase::FileArrayBase(iostream& io)
: stream(io)
{
char pattern[sizeof(ArrayBegin)];
stream.read(pattern,sizeof(pattern));
initFromFile(pattern);

// set put and get pointer to past the end pos.
stream.seekp(stream.tellg());
}

FileArrayBase::FileArrayBase(size_t elements,
size_t elem_size)
: pfile(new temporaryfile),
stream(pfile->stream()),
array_size(elements),
e_size(elem_size),
home(stream.tellg())
{
stream.seekp(home+array_size*e_size);
char c;
stream.write(&c,1);
// set put and get pointer to past the end pos.
stream.seekg(stream.tellp());
}

void FileArrayBase::initFromFile(const char* p)
{
// Check if the read pattern is correct
if (strncmp(p,ArrayBegin,sizeof(ArrayBegin))) {
// What to do? It was all wrong!
stream.clear(ios::failbit); // for lack of better,
// set the fail flag.
return;
}
// OK, we have a valid array, now let's see if
// it's of the right kind.
stream.read((char*)&e_size,sizeof(e_size));

// Get the size of the array. Can't do much with
// the size here, though.
stream.read((char*)&array_size,sizeof(array_size));

// Now we're past the header, so we know where the
// data begins and can set the home position.
home = stream.tellg();
stream.seekg(home+e_size*array_size);
// Now positioned immediately after the last
// element.
char epattern[sizeof(ArrayEnd)];
stream.read(epattern,sizeof(epattern));
if (strncmp(epattern,ArrayEnd,sizeof(ArrayEnd)))
{
// Whoops, corrupt file!
stream.clear(ios::failbit);
return;
}
// Seems like we have a valid array!
}

iostream& FileArrayBase::seekg(size_t index) const
{
// what if index is out of bounds?
stream.seekg(home+index*e_size);
// what if seek failed?
return stream;
}

iostream& FileArrayBase::seekp(size_t index) const
{
// What if index is out of bounds?
stream.seekp(home+index*e_size);
// What if seek failed?
return stream;
}

size_t FileArrayBase::size() const
{
return array_size;
}

size_t FileArrayBase::element_size() const
{
return e_size;
}
Apart from the tricky questions, it's all pretty straightforward. The really good news, however, is how easy this makes the implementation of the class template ``FileArray''.

template <class T>
class FileArray : private FileArrayBase
{
public:
FileArray(iostream& io, size_t size);// create one.
FileArray(iostream& io); // use existing array
FileArray(size_t elements); // create temporary
T operator[](size_t index) const;
FileArrayProxy<T> operator[](size_t index);
size_t size() { return FileArrayBase::size(); };
private:
FileArray(const FileArray&); // illegal
FileArray& operator=(const FileArray&);
// illegal

T readElement(size_t index) const;
void storeElement(size_t index, const T& elem);
friend class FileArrayProxy<T>;
};
Now watch this!

template <class T>
FileArray<T>::FileArray(iostream& io, size_t elements)
: FileArrayBase(io,
elements,
FileArrayElementAccess<T>::size)
{
}

template <class T>
FileArray<T>::FileArray(iostream& io)
: FileArrayBase(io)
{
// what if element_size is wrong?
}

template <class T>
FileArray<T>::FileArray(size_t elements)
: FileArrayBase(elements,
FileArrayElementAccess<T>::size)
{
}

template <class T>
T FileArray<T>::operator[](size_t index) const
{
// what if index>= size()?
return readElement(index);
}

template <class T>
FileArrayProxy<T>
FileArray<T>::operator[](size_t index)
{
// what if index>= size()?
return FileArrayProxy<T>(*this, index);
}

template <class T>
T FileArray<T>::readElement(size_t index) const
{
// what if index >= size()?
iostream& s = seekg(index); // parent seekg
// what if seek fails?
return FileArrayElementAccess<T>::readFrom(s);
// what if read failed?
// What if too much data was read?
}

template <class T>
void FileArray<T>::storeElement(size_t index,
const T& element)
{ // what if index >= size()?
iostream& s = seekp(index); // parent seekp
// what if seek fails?
FileArrayElementAccess<T>::writeTo(element,s);
// what if write failed?
// What if too much data was written?
}
How much easier can it get? This reduces code bloat, and also makes the source code easier to understand, extend and maintain.

What can go wrong?

Already in the very beginning of this article series, in part 1, I introduced exceptions, the C++ error handling mechanism. Of course exceptions should be used to handle the error situations that can occur in our array class. When I introduced exceptions, I didn't tell the whole truth about them. There was one thing I didn't tell, because at that time it wouldn't have made much sense. That one thing is that when exceptions are caught, dynamic binding works, or to use wording slightly more English-like, we can create exception class hierarchies with public inheritance, and we can choose at what level to catch. Here's a mini example showing the idea:

class A {};
class B : public A {};
class C : public A {};
class B1 : public B{};

void f() throw (A); // may throw any of the above

void x()
{
try {
f();
}
catch (B& b) {
// **1
}
catch (C& c) {
// **2
}
catch (A& a) {
// **3
}
}
At ``**1'' above, objects of class ``B'' and class ``B1'' are caught if thrown from ``f''. At ``**2'' objects of class ``C'' (and descendants of C, if any are declared elsewhere) are caught. At ``**3'' all others from the ``A'' hierarchy are caught. This may seem like a curious detail of purely academic worth, but it's extremely useful. We can use abstraction levels for errors. For example, we can have a root class ``FileArrayException'', from which all other exceptions regarding the file array inherit. We can see that there are clearly two kinds of errors that can occur in the file array: abuse, and environmental issues outside the control of the programmer. By abuse I mean things like indexing outside the valid bounds, and by environmental issues I mean faulty or full disks (since there are several programs running, a check for enough disk space is still taking a chance. Even if there was enough free space when the check was made, that space may be occupied when the next statement in the program is executed.)
A reasonable start for the exception hierarchy then becomes:

class FileArrayException {};
class FileArrayLogicError
: public FileArrayException {};
class FileArrayRuntimeError
: public FileArrayException {};
Here ``FileArrayLogicError'' is for clear violations of the (not too clearly stated) preconditions, and ``FileArrayRuntimeError'' is for things that the programmer may not have a chance to do anything about. In a perfectly debugged program, the only exceptions ever thrown from file arrays will be of the ``FileArrayRuntimeError'' kind. We can divide those further into:

class FileArrayCreateError
: public FileArrayRuntimeError {};
Thrown whenever the creation of the array fails, regardless of why (it's not very easy to find out whether the cause is a faulty disk or a lack of disk space, for example.)

class FileArrayStreamError
: public FileArrayRuntimeError {};
Thrown if, after creation, something goes wrong with a stream; for example, if seeking, reading or writing fails.

class FileArrayDataCorruptionError
: public FileArrayRuntimeError {};
Thrown if an array is created from an old existing file, and we note that the header or trailer doesn't match what we expected.

class FileArrayBoundsError
: public FileArrayLogicError {};
Thrown when addressing outside the legal bounds.

class FileArrayElementSizeError
: public FileArrayLogicError {};
Thrown if the read/write members of the element access traits class are faulty and either write too much (thus overwriting the data for the next element) or read too much (in which case the last few bytes read will be garbage picked from the next element.) It's of course possible to take this even further, but I think this is quite enough.
Now we have a reasonably fine level of error reporting, yet an application that wants only a coarse level of error handling can choose to catch near the top of the hierarchy.
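For example, a function can pick out the interesting specific cases first and still have a catch-all for everything else the file array might throw. A sketch (assuming an ``int'' instantiation of the array):

void copy_elements(FileArray<int>& from, FileArray<int>& to)
{
  try {
    for (size_t i = 0; i < from.size(); ++i) {
      int v = from[i]; // read through the proxy
      to[i] = v;       // write through the proxy
    }
  }
  catch (FileArrayBoundsError&) {
    // fine-grained: a programming error, e.g. ``to'' is
    // smaller than ``from''
  }
  catch (FileArrayException&) {
    // coarse-grained: whatever else went wrong in the file array
  }
}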
As an exercise, I invite you to add the throws to the code. Beware, however: it's not a good idea to add exception specifications to the member functions making use of the T's, since you cannot know which operations on the T's may throw, or what they throw. You can increase the code-size and legibility gains from the private inheritance of the implementation by putting quite a lot of the error handling in the base class.
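To give you a head start on that exercise, here's how the throws might look in ``readElement''. This is a sketch only; ``storeElement'' and the two ``operator[]'' follow the same pattern:

template <class T>
T FileArray<T>::readElement(size_t index) const
{
  if (index >= size())
    throw FileArrayBoundsError();  // abuse by the programmer
  iostream& s = seekg(index);      // parent seekg
  if (!s)
    throw FileArrayStreamError();  // environmental problem
  return FileArrayElementAccess<T>::readFrom(s);
}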

Iterators

An iterator into a file array is something whose behavior is analogous to that of pointers into arrays. We want to be able to create an iterator from the array (in which case the iterator refers to the first element of the array.) We want to access that element by dereferencing the iterator (unary operator *,) and we want iterator arithmetic with integers.
An easy way of getting there is to let an iterator contain a pointer to a file array, and an index. Whenever the iterator is dereferenced, we return (*array)[index]. That way we even get error handling for iterator arithmetic that leads us outside the valid range of the array for free, from the array itself. The iterator arithmetic becomes simple too, since it's just ordinary arithmetic on the index type. The implementation thus seems easy; all that's needed is to define the operations we want for the iterators. Here's my idea:
  • creation from array yields iterator referring to first element
  • copy construction and assignment are of course well behaved.
  • moving forwards and backwards with operator++ and operator--.
  • addition of an array and a ``long int'' value ``n'' yields an iterator referring to the n:th element of the array.
  • iterator+=n (where n is of type long int) adds n to the value of the index in the iterator. This addition is never an error; it's dereferencing the iterator that's an error if the index is out of range. Operator -= is analogous.
  • iterator+n yields a new iterator referring to the iterator.index+n:th element of the array, and analogous for operator-.
  • iterator1-iterator2 yields a long int which is the difference between the indices of the iterators. If iterator1 and iterator2 refer to different arrays, it's an error and we throw an exception.
  • iterator1==iterator2 returns non-zero if the arrays and indices of iterator1 and iterator2 are equal.
  • iterator1!=iterator2 returns !(iterator1==iterator2)
  • *iterator returns whatever (*array)[index] returns, i.e. a FileArrayProxy<T>.
  • iterator[n] returns (*array)[index+n].
  • iterator1<iterator2 returns non-zero if iterator1.index < iterator2.index. If the iterators refer to different arrays, it's an error and we throw an exception. Likewise for operator>.
  • iterator1>=iterator2 returns !(iterator1<iterator2), and iterator1<=iterator2 returns !(iterator1>iterator2).
I think the above is an exhaustive list. None of the operations is difficult; it's just a lot of code to write, and thus a good chance of making errors. With a little thought, however, quite a lot of code can be reused over and over, thus reducing both the amount to write and the risk of errors. As an example, consider a class for which, given an object ``o'' and some other value ``v'', the operations ``o+=v'', ``o+v'' and ``v+o'' are well defined and behave like they do for the built-in types (which they really ought to, unless you want to give the class users some rather unhealthy surprises). A rule of thumb for such a class is to define ``operator+='' as a member, and two versions of ``operator+'' outside the class, implemented in terms of ``operator+=''. Here's how it's done in the iterator example:

template <class T>
class FileArrayIterator
{
public:
  FileArrayIterator(FileArray<T>& f);
  FileArrayIterator<T>& operator+=(long n);
  FileArrayProxy<T> operator*();
  FileArrayProxy<T> operator[](long n);
  ...
private:
  FileArray<T>* array;
  unsigned long index;
};

template <class T> FileArrayIterator<T>
operator+(const FileArrayIterator<T>& i, long n);

template <class T> FileArrayIterator<T>
operator+(long n, const FileArrayIterator<T>& i);

template <class T>
FileArrayIterator<T>::FileArrayIterator(
  FileArray<T>& a
)
  : array(&a),
    index(0)
{
}

template <class T>
FileArrayIterator<T>::FileArrayIterator(
  const FileArrayIterator<T>& i
)
  : array(i.array),
    index(i.index)
{
}

template <class T>
FileArrayIterator<T>&
FileArrayIterator<T>::operator+=(long n)
{
  index += n;
  return *this;
}

template <class T> FileArrayIterator<T>
operator+(const FileArrayIterator<T>& i, long n)
{
  FileArrayIterator<T> it(i);
  return it += n;
}

template <class T> FileArrayIterator<T>
operator+(long n, const FileArrayIterator<T>& i)
{
  FileArrayIterator<T> it(i);
  return it += n;
}

template <class T>
FileArrayProxy<T> FileArrayIterator<T>::operator*()
{
  return (*array)[index];
}

template <class T>
FileArrayProxy<T>
FileArrayIterator<T>::operator[](long n)
{
  return (*array)[index + n];
}
Surely, the code for the two versions of ``operator+'' must still be written, but since their behaviour is defined in terms of ``operator+='', an error only ever needs correcting in one place. There's no need to display all the code here in the article; you can study it in the sources. The above shows how it all works, though, and as you can see, it's fairly simple.
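As a quick illustration of the iterator in use, here's a sketch. It assumes the ``operator++'' from the wish-list above is implemented, an ``int'' instantiation of the array, and the usual ``cout'' from the iostream library:

void print_all(FileArray<int>& fa)
{
  FileArrayIterator<int> i(fa);
  for (size_t n = 0; n < fa.size(); ++n, ++i) {
    int v = *i; // the proxy converts to int on read
    cout << v << "\n";
  }
}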

Recap

This month the news in short was:
  • You can increase flexibility for your templates without sacrificing ease of use or safety by using traits classes.
  • Enumerations in classes can be used to have class-scope constants of integral type.
  • Modern compilers do not need the above hack. Defining a class-scope static constant of an integral type in the class declaration is cleaner and more type safe.
  • Standard C++, and even C, does not have any support for the notion of temporary files. Fortunately there are commonly supported extensions to the languages that do.
  • Private inheritance can be used for code reuse.
  • Private inheritance is very different from public inheritance. Public inheritance models ``is-A'' relationships, while private inheritance models ``is-implemented-in-terms-of'' relationships.
  • A user of a class that has privately inherited from something else cannot take advantage of this fact. To a user the private inheritance doesn't make any difference.
  • Private inheritance is, in real life, used far less than it should be. In many situations where public inheritance is used, private inheritance should have been used.
  • Exception catching is polymorphic (i.e. dynamic binding works when catching.)
  • The polymorphism of exception catching allows us to create an arbitrarily fine-grained error reporting mechanism while still allowing users who want a coarse error reporting mechanism to use one (they'll just catch classes near the root of the exception class inheritance tree.)
  • Always implement binary operator+, operator-, operator* and operator/ as functions outside the classes, and always implement them in terms of the operator+=, operator-=, operator*= and operator/= members of the classes.

Exercises

  • Alter the file array such that it's possible to instantiate two (or more) kinds of FileArray in the same program, where the alternatives store the data in different formats. (Hint: the alternatives will all need different traits class specialisations.)
  • What's the difference between using private inheritance of a base class, and using a member variable of that same class, for reusing code?
  • In which situations is it crucial which alternative you choose?
