Networking Primer – Part 3: Application, Presentation and Session Layers

Previous: Networking Primer – Part 2: Defining Networking with OSI and TCP/IP Suite

I’ve decided to group the top three layers together into one post, because they relate more to the data being transmitted across the network than to the underlying transport mechanisms themselves. These three layers deal with the semantics of the communication: who the data will be sent to, the format of the data and the etiquette to be adhered to between the communicating nodes.

Lego Pirate Ship

The Pirate Ship: As with most technical concepts, analogies can help us understand the underpinning processes which are happening as part of the communication.  For this series, I’m going to use the following analogy: I work in an office in Manchester and I’d like to send a pirate ship made of Lego to a friend, Rich, who works in an office in London.  In our day to day lives, that’s a pretty simple concept that requires a couple of addresses and a postal service. Communicating data across a network can occur in much the same way. Let’s step through the network stack to see how.

Application Layer (OSI Layer 7)

The application layer is the piece of the puzzle that sits closest to our end-user application. It is worth mentioning here that when we refer to the services in this layer, we are not referring to the actual application being used by the end user. To expand on this, an example application might be the AnyCo ERP solution. That ERP solution may provide the capability to send reports via “email”. So it’s actually the email service which fits into the application layer, not AnyCo ERP; AnyCo ERP would sit outside of the OSI stack in an upper, out-of-scope layer. Other services you might find in the application layer include “File Transfer”, “Web Access” and “Network Management”.

This layer is primarily responsible for determining suitable communication partner nodes and their identities. It is also responsible for ensuring that the relevant resources are available to send the transmission. It’s in this layer that the aforementioned X.400 protocol exists. Synchronisation of communication is also dealt with at this level.
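To make this a little more concrete, here is a minimal Python sketch of an application-layer service in action: handing a report to the email service over SMTP (the protocol that, as discussed in Part 2, largely displaced X.400). The server name and addresses are hypothetical placeholders.

    import smtplib
    from email.message import EmailMessage

    # Build the message - the data our application wants the email
    # service to deliver.
    msg = EmailMessage()
    msg["From"] = "reports@anyco.example.com"   # hypothetical addresses
    msg["To"] = "rich@example.com"
    msg["Subject"] = "Monthly report from AnyCo ERP"
    msg.set_content("Please find the latest report below.")

    # The SMTP service identifies the communication partner (the mail
    # server), checks it is ready to accept the transmission, then
    # hands the data down the stack.
    with smtplib.SMTP("mail.example.com", 25) as smtp:  # hypothetical server
        smtp.send_message(msg)

Note that AnyCo ERP itself never appears here; it simply calls on the email service, and it’s the email service that lives at Layer 7.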

The Pirate Ship: In our scenario, the Layer 7 service we want to use is Lego Sending. I have established that Rich is a suitable communication partner, as he has advertised that he likes Lego and can accept that type of toy. I’ve also established that the postal service has capacity and is suitable for sending the pirate ship.

The Presentation Layer (OSI Layer 6)

Now that we have established a suitable place to send my data, and that the relevant network resources are in place to do so, we need to look at what exactly we are going to send. The presentation layer deals with the format of the data: it is there to abstract the meaning of the data as the application sees it into a standardised format that can be used by the underlying network layers. Where an application may be providing freeform text, the network needs a way of encoding it. An example of a standard working at this level is XML. Encryption may also happen at this level.
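As a rough sketch (the field names are invented for illustration), here’s what that looks like in Python: application data serialised into a standardised format, then encoded into bytes for the layers below. JSON is used here, but XML would serve the same purpose.

    import json

    # The data as the application sees it.
    lego_order = {"model": "Pirate Ship", "bricks": 1264, "recipient": "Rich"}

    # Presentation: serialise into a standardised format...
    text = json.dumps(lego_order)

    # ...and encode the text into bytes for the underlying layers.
    payload = text.encode("utf-8")

    # Encryption, if required, could also be applied to the payload at
    # this point before it is handed down the stack.
    print(payload)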

The Pirate Ship: Let’s think of the presentation of our pirate ship as a set of Lego bricks stuck together in a specific arrangement. The bricks are of standard sizes, colours and shapes. It’s those attributes that make up the format of the data.

The Session Layer (OSI Layer 5)

This is the layer responsible for setting up and tearing down the connection that will be used to transmit the data. It should be thought of as something more persistent than a single transmission of data. It is not responsible for actually sending the data; it simply executes the steps required to set up and maintain a connection. These steps might be simple requests for resources, or handshakes with the devices to be traversed. You might, for example, authenticate with a website to create a session, then download lots of different files using that same session. For our purposes, we’ll keep it simple.
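That website example can be sketched in Python with the “requests” library: the session is established once and then reused for several transfers. The site, credentials and file names are all hypothetical.

    import requests

    BASE = "https://files.example.com"  # hypothetical website

    # Set up the session once: authenticate and keep the resulting
    # cookies/state for subsequent requests.
    session = requests.Session()
    session.post(BASE + "/login", data={"user": "simon", "password": "secret"})

    # Reuse the same established session for many downloads.
    for name in ("report1.pdf", "report2.pdf"):
        response = session.get(BASE + "/files/" + name)
        print(name, response.status_code)

    # Tear the session down when we're finished.
    session.close()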

The Pirate Ship: I call my postal service to tell them I’m going to send a package to my friend. They verify my account number and then book slots for the package on all the vehicles which will be traversed between my office and Rich’s.

Summary

So far, via analogy, we have established the objective of our application and the Layer 7 application service (Lego Sending). We have found the identity of our destination communication partner, ensured that the relevant resources are in place to send our Lego and called the postal service to set up the relevant connections to start the communication. Next we’ll see what happens when we actually start sending the Lego.

Next: Networking Primer – Part 4: Transport Layer, TCP and UDP

Networking Primer – Part 2: Defining Networking with OSI and TCP/IP Suite

Previous: Networking Primer – Part 1: Introduction

What is a network?

This may sound like a very basic question, but I’ll assume the lowest common denominator here and define this briefly.

A network is a set of two or more computing entities (nodes) that are configured to communicate with each other by passing information across an interconnecting medium.

OK, now we’ve got that out of the way, we can talk about how exactly those nodes communicate with each other in a way that makes sense and achieves our objective of passing information between them.
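If you’d like to see that definition in its smallest possible form, here’s a toy Python sketch: two “nodes” (two sockets, here on the same machine over the loopback interface) passing a piece of information between them. It’s an illustration of the definition, not a production pattern.

    import socket
    import threading

    # Node B binds to an address and listens before node A connects.
    server = socket.socket()
    server.bind(("127.0.0.1", 5000))
    server.listen(1)

    def node_b():
        conn, _ = server.accept()
        with conn:
            print("Node B received:", conn.recv(1024).decode())
        server.close()

    threading.Thread(target=node_b).start()

    # Node A connects across the interconnecting medium (loopback here)
    # and passes some information to node B.
    with socket.socket() as node_a:
        node_a.connect(("127.0.0.1", 5000))
        node_a.sendall(b"Hello from node A")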

A Little History

It is easy to imagine the first baby steps of networking, which occurred in the mid-20th century. As with all technology, we start with the simplest goal and see how the tools we have available can achieve it. I’m not going to go into sending signals down telegraph wires, etc., but we can assume that the first step was to send a simple signal across a wire between two locally sited computers. Beyond this, more milestones were surpassed to enable us to send over greater distances and with more complex topologies. This resulted in the creation of a number of protocols, which specified things like how two nodes would set up a communication session and what format the data sent should be in. As technologies are researched, more often than not they diverge into multiple streams of activity, with different camps developing different ideas on how to progress. This ultimately results in a bunch of disparate and incompatible technologies. Whereas in other areas this might be palatable, workable and of little impact, in networking it is clearly not viable. The whole point of networking is that the entities involved can speak the same language. There must be standardisation, and the first and overarching grandfather of network standardisation is the Open Systems Interconnection (OSI) model.

The OSI model

The OSI model is a set of specifications, rules, guidelines, instructions and protocols that describe how networking should work. It is important to understand how this is used today. As you might guess from the above description, the model is large, complex and in some ways all-encompassing. Back in the early ’80s and ’90s, many companies implemented technologies that strictly adhered to the OSI standards. In fact, I worked directly with one of these, related to email messaging (the X.400 protocol suite), in a previous role. The OSI specifications are quite complex and difficult to implement. They take a lot of specialist knowledge and effort, and as a result the detailed elements of the model were soon ditched in favour of more agile standards which could be delivered quickly and with ease. For example, SMTP is now the de facto standard for email messaging, and X.400 is only used in some specialist areas, such as the military. (Read more about SMTP for military email here: Command Email Whitepaper.)

That being said, the OSI model is still widely used today. Although the detailed implementations have been ditched, the model is used at a conceptual level in day to day networking conversations. It breaks down the elements of network communication into seven logical layers, and by understanding these it is very easy for network engineers to gain a common frame of reference to quickly isolate the crux of an issue during a discussion.

The Seven OSI Layers

The seven layers of the OSI model are as follows:

OSI Stack

As we can see, these are stacked one on top of another, which is why we commonly refer to the multiple layers as a “network stack”. All of the nodes in a network will have similar stacks. A common method to aid in remembering the sequence of the layers (Application, Presentation, Session, Transport, Network, Data Link and Physical) is to use a mnemonic. “All People Seem To Need Data Processing” is a good example, but there are many and you could make up your own.

OSITwoNode

During a network communication, we start at the top of the stack with application-level semantics and gradually process down through the layers. At each layer we use the mechanism of encapsulation, until we reach the physical layer, which is responsible for sending the actual bits and bytes from the source node across the interconnecting media to the destination node(s). At the destination node(s), the physical bits are then pushed up through the stack, using decapsulation at each layer, until the destination application or service receives its intended information. This can be thought of like nested Russian dolls, with a different doll representing each layer of the stack; or, perhaps easier to visualise, an envelope, within an envelope, within an envelope and so on. We’ll discuss these with a practical example in upcoming blogs.
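A toy Python sketch may help visualise the envelope-within-an-envelope idea. Each layer simply wraps whatever it is handed from the layer above (real headers are binary structures rather than strings, of course):

    # Each layer encapsulates the data handed down from the layer above.
    def encapsulate(payload, header):
        return "[" + header + "|" + payload + "]"

    data = "GET /pirate-ship"                       # application data
    segment = encapsulate(data, "TCP header")       # transport layer
    packet = encapsulate(segment, "IP header")      # network layer
    frame = encapsulate(packet, "Ethernet header")  # data link layer

    print(frame)
    # [Ethernet header|[IP header|[TCP header|GET /pirate-ship]]]
    # The destination node peels these off in reverse (decapsulation).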

The TCP/IP Suite

I’m not going to cover the TCP/IP suite in depth, but it is worth understanding what it is and how it relates to the OSI model.

TCP/IP Suite to OSI Mapping

As we saw above, although we still use the OSI model as a conceptual frame of reference, we no longer use its detailed implementation specifications. The TCP/IP suite was loosely developed as an alternative to the OSI model, with a view to creating a simplified four-layer model and implementation mechanism. The TCP/IP suite contains a much smaller set of protocols and is actually used in the bulk of network implementations today. We do, however, need to be careful in the use of our terminology. If we’re referring to “layer 4” in a discussion, it is most likely that we are talking about the Transport Layer of the OSI stack and not the Application Layer of the TCP/IP suite.
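For reference, the conventional mapping between the two models is:

    TCP/IP Application     <->  OSI Application, Presentation, Session (7-5)
    TCP/IP Transport       <->  OSI Transport (4)
    TCP/IP Internet        <->  OSI Network (3)
    TCP/IP Network Access  <->  OSI Data Link, Physical (2-1)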

Next:  Networking Primer – Part 3: Application, Presentation and Session Layers

Networking Primer – Part 1: Introduction

The world of networking has been fairly static for many years now. It’s been historically characterised by static infrastructures that require infrequent changes.  These configuration changes were performed via command line interfaces by network engineers, usually sitting with a laptop and a cable plugged directly into a piece of networking hardware. Activities were manual, repeated for every individual device and extremely error prone due to the non-human readable nature of network configuration information.

The workloads running in the modern datacenter have most definitely changed in recent years. It has become apparent that the capabilities of current networking devices and operational approaches simply cannot keep up with the pace of change. In the modern datacenter, the rapid and overwhelming success of server virtualisation has fundamentally changed the way applications consume resources, and the network has become somewhat of a bottleneck in providing agile, reliable and cost-effective means of delivering new applications. In addition to the shortcomings of existing technology, operational processes and a tendency to silo server, storage and networking departments have also become a major blocker to any significant progress in dealing with these challenges.

In the last 2-3 years, there has been industry recognition that these challenges need to be addressed, and there has been a marked shift in strategy. There has been a wide realisation that the boundaries need to break down and the siloed teams need to converge into singular, collaborative and multi-skilled teams, delivering IT in a more integrated manner. The technology also needed to change, and the Software Defined Networking (SDN) movement has been central to this shift.

Some time ago I worked in the military messaging field and had wide exposure to networking as it relates to battlefield communications protocols. The concepts and NATO protocols that underpin military messaging are not so different to those used in our datacenters, and I have been working on understanding the datacenter networking space for the last six months or so. I’d like to share what I’ve learned and hopefully provide a reasonable learning resource for those administrators who are preparing themselves for the new converged infrastructure world. I’ll be taking things right back to basics, explaining at a beginner level what networking fundamentally is and working through to how we are addressing the key challenges faced by organisations today.

Next: Networking Primer – Part 2: Defining Networking with OSI and TCP/IP Suite


Software Defined X – Automation: Job Stealer or Job Enabler?

I’ve had many conversations in recent weeks about the commoditization of the data center, with many people concerned about the diminishing need for specialist hardware and the effects of greater automation through software. More specifically, how these might affect the job prospects of administrators and other technical roles in the modern IT environment.

We are in an era of rapid evolutionary change, and this can be unsettling, as change often is. There seems to be a wide variety of reactions to these changes. At one end, there is complete denial and a desire to retain the status quo, with an expectation that these industry changes may never occur. In the middle, we have those that tip their hat in recognition of the general direction of the trends but expect things to happen more gradually. And then there are those that embrace it, expecting to gain some level of competitive advantage by being a first mover. If there is one thing that is certain, it is that if you find yourself at the wrong end of that spectrum, you will most definitely find yourself in difficulty.

No Change Around Here

The change is happening and happening more quickly than most expect.  The automation of data center operations and a focus on innovation is a key objective for most organisations at the moment. “Keeping the lights on” tasks are becoming less relevant in this world.

Casting Off the Shackles of Hardware

Developing custom hardware-based intelligence is complex. It often involves the research and production of custom chipsets for these devices. Due to the research, prototyping and production requirements of this type of operation, we are usually working to a 2-3 year development and release cycle. In fact, most organisations have been used to this kind of procurement cycle, executing a hardware refresh every 3-5 years.

This has worked historically, but today there are new kids on the block, and they are eating the market with a new approach to developing and delivering services. Pioneers like Facebook, Google and Netflix have fundamentally changed how service delivery works. These operations have decoupled their software intelligence from hardware and deliver their services on inexpensive commodity hardware. This not only reduces their capital outlay, it also provides them with a platform to rapidly deliver agile software services. In these types of environments, it is not uncommon to see software releases move from an 18-24 month cycle to a daily or weekly cycle. Strategically, they can pivot at a moment’s notice, and they can easily scale or contract operations at very low cost. As you might imagine, this kind of agility has become very challenging from a competitive standpoint for companies like Microsoft, who have had 3-4 year major release cycles baked into the fibre of their operational approach (e.g. Exchange, Windows Server, etc.).

What About Automation?

The more we move towards software-controlled infrastructures, the more easily they can be automated. Most solutions today are built with some kind of API (application programming interface) to enable other applications to programmatically control or manage them in some way. In this decade, the industry has moved firmly away from proprietary API technologies towards standardised ones, more often than not based on the RESTful architecture. Alongside this, we are starting to see the rise of DevOps tools such as Puppet and Chef, which help bridge the gap between IT operations and the developers actually creating the applications that organisations rely on.
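To illustrate the shape of this, here’s a hedged Python sketch of driving infrastructure through a RESTful API; the controller endpoint and payload are invented for the example, but the pattern (an authenticated HTTP call instead of a console cable and a CLI session) is the point.

    import requests

    # Hypothetical SDN controller endpoint and payload.
    controller = "https://sdn-controller.example.com/api/v1"

    # Create a network programmatically rather than device by device.
    response = requests.post(
        controller + "/networks",
        json={"name": "web-tier", "vlan": 120},
        headers={"Authorization": "Bearer <token>"},
    )
    print(response.status_code)

Because calls like this are just code, they can be scripted, version-controlled and repeated without error across hundreds of devices, which is exactly what tools like Puppet and Chef build upon.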

So What Does This Mean For the Modern IT Professional?

As the development of these tools and API interoperability progresses, IT operations roles will undoubtedly also have to evolve. This does not mean that there will be fewer jobs in IT. In fact, IT skills have become more relevant than ever, but those skills have to change a little. It is time to start moving up the stack, putting more focus on innovation in applications and services rather than on keeping the lights on down in the bits and bytes of the infrastructure. By doing this, these industry changes should become a massive job and career enabler, not a cause of suspicion and concern for job security.

I had a chat with a family member this week which summed this up really well for me. We were discussing the Luddites, a 19th-century movement in my home region, the North of England. The Luddites were a group of textile workers who protested against the mechanisation of garment production. They did this violently, under the auspices of “those machines are taking our jobs, we’ll have nothing to do and we’ll all starve”. A couple of hundred years on, we can see that starvation didn’t happen, and those same people survived by finding new ways to innovate. On a side note, I once received a letter from a CBE who had seen me on TV discussing an environmental issue, calling me a Luddite. I found this most amusing given the industry I work in and my lust for technological progress. In the same conversation with the family member, I mentioned that I was looking forward to the introduction of robot taxis (e.g. self-driving Google cars) due to the efficiencies and low cost of car sharing. They replied, “but that could be 20,000 taxi drivers losing their jobs in Manchester alone”. I replied, “Yes, but that’s also 20,000 people who could alternatively be working on curing cancer, pioneering space travel or solving the world’s energy problems”.

Conclusion – Software Defined X – Automation: Job Stealer or Job Enabler?

For me, I see it as a job enabler. My advice: embrace the change, relish the opportunity to innovate and change the world for the better… one step at a time.