This is a featured article used with permission from the original author.

Please see the 'About the Author' section at the bottom of this page.

Turbo Fredriksson



Lead Cloud Architect and veteran Debian Linux contributor




Old technologies and how to find them


This is the second article of mine, out of at least three.
More if I can come up with additional talking points.
I've decided on a theme of the series – what was, what is, and what will be.


The first article, “How has IT changed in the last forty years”, was about general technology, where it came from, and where I think it's going.
This one is more specific.
It's something that everyone who has to deal with more than “a few” servers and “a few” users should have a rough insight into.
Read the first article here


Pros and cons

Old tech!?
Why!??
Using the latest-and-greatest of anything is the way to go..
Right!??

Yes.. And no.
Yes, in the sense that it's more fun, more interesting, and much more rewarding to work with the latest tech.
It really is!

However, there is one area where I strongly, vigorously, advocate being much more reserved.
Conservative, if you like.
Only use the stable version(s).
The “Long Term Support” versions.
Which are usually four or five years old.
But they're stable, well tested, and secure!

They're guaranteed not to .. “mess things up”.
Changes and updates (mostly security updates) are few and far between, and extremely controlled to make sure they fix the problem in a non-interfering way.
It's both their downfall, and their strength.

Customers, the clients, may not care about the code, but they DO care about stability.
Or rather, they care about not having instability in their service(s)!

Hardware

I also very much like recycling and reuse!
I hate throwing away things that are perfectly usable.
I am somewhat of a hoarder, but I do keep it in check, mostly – my garage is a good indicator of that.
It holds plenty of old, generationally old, computers and machines.
Most of them are still in use by me.

But I've never really been a hardware guy.
Not even in the old VIC20/C64/Amiga times, when hardware mods, soldering on the motherboard, and the like were, for one reason or another, not just helpful but actually required.

For the hobby user, or for those interested in improving or changing their career and learning other things: you can't always rely on your employer – YOU need to take responsibility for YOUR career.

That has always been true, and is more so now than ever!
If, or when, it is directly related to your job, then I'm absolutely advocating that the employer should pay for your training.
But what do you do if it's out of scope?

You get your own hardware, and do what you need to learn, whatever it is you want to learn.
Obviously, getting the latest-and-greatest is absolutely out of the question!
If you're that rich, you don't need training; just retire to a tropical island and be happy.

So old hardware is a nice touch.
Much of it, even if it's thirty years old, is more than enough to get you going.
It obviously won't perform very well, but that's not the point.
And with thirty-year-old hardware, people will practically pay you to take it off their hands.

Software

I'm a software, systems, guy.
What it was running on was never of much interest to me, as long as it ran.

And of all that, the one thing that has been closest to my heart for over twenty years is authentication and authorization.
The two are often used interchangeably, but they're most definitely not the same!

Authentication answers the question “who are you”, and authorization answers the question “what do you want”.
The older readers might recognise those two questions; for the younger crowd, they are the questions that The Vorlons and The Shadows asked to validate and trick races into doing their bidding.
Not very friendly, either of them.

For those that need more, have a look at Babylon 5 – https://www.imdb.com/title/tt0105946/.
My favourite TV show from thirty years ago.
The CGI, thirty years old now, still holds its own against the most modern CGI and tech!
AND, more interestingly for me as a Commodore guy, it was made on a whole bunch of Amiga 2000s, with Video Toaster plugin cards, running LightWave 3D.
The absolute top-of-the-line at the time.
An absolutely, horrendously expensive setup, but it produced some magnificent results!
Ones that, even after thirty years, haven't been bettered by much.
In my opinion, anyway.

Distributed auth

This is the whole point.
Not just for this article, but for the company and organisation.
Distribution removes the single point of failure – something that everyone should (!) be terrified of.
Downtime is never a good thing.

Old school

Anyway, in “the old days” – which weren't as good as my grandfather always said – there was no way to have a unified login.
As in, log in to one system and have the same username and password everywhere.
As an admin, you had to sync those manually.
As in, making sure to copy /etc/passwd and /etc/group everywhere.
Which was a pain, but remember, this was LOOOONG before Windows and graphical systems.
It was at the dawn of UNIX as well – first there was ARPANET (the predecessor of today's Internet, late 1969), then came UNIX not long after that (first released in late 1971).
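To give a flavour of that manual syncing, a minimal sketch of an admin's “distribution script” (host names are made up, and it only echoes what it would do – a real one back then would have used rcp, later scp):

```shell
#!/bin/sh
# Hypothetical "old school" sync: push the flat auth files to every host.
HOSTS="web1 web2 db1"

sync_auth_files() {
    for host in $HOSTS; do
        # Dry run: echo the copy command instead of actually running it
        echo "scp -p /etc/passwd /etc/group root@${host}:/etc/"
    done
}

sync_auth_files
```

Every new machine, every password change, every new user – run it again, and hope nothing was missed.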

UNIX of that time, all of the '70s and a large part of the '80s, was mostly single-server setups.
You had one big server and thin clients – not quite computers; they had very little memory, very simple graphics, a monitor, and a keyboard.

No hard drive.
Everything was done on the server, including any graphics generated, which was then sent over the network more or less (note: simplification!) as a picture that could be shown on the local display.

OR, as in the very early beginnings, they didn't even have graphics (as we think of it today), only a shell login.
As in, telnet (or rlogin, which is – was – a simpler version of telnet).
Very simple, but insecure, predecessors to SSH.
They all work in basically the same way; it's just that SSH is heavily encrypted (including the authentication – the sending of the username and password – which is the most important part).

The first step towards the future

When estates grew – more servers, more clients – and especially when the thin client “died” in the late eighties and early nineties (because everyone wanted a “proper” computer on their desk), a very strong need arose for a distributed authentication system.
Authorization was still used locally, mostly.

So, NIS – Network Information System (or Yellow Pages, YP, as it was originally called) was created (Sun Microsystems, 1985).
You created a special database, kept it updated and changed, and it took care of the distribution.

You then had specialized authentication daemons (telnetd, rlogind, etc) with support for NIS.
So if/when you changed your password, this change was distributed automatically by the NIS system, and was available across the estate.

So if/when you changed authentication to NIS (or set up a [new] machine from scratch), you had to recompile, reinstall, and distribute these new clients and daemons with NIS support everywhere.
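On the systems of that era, NIS support had to be compiled straight into the clients and daemons, as described above; on a modern glibc system the equivalent wiring is just a couple of lines in /etc/nsswitch.conf (a sketch, not a complete file):

```
# /etc/nsswitch.conf -- consult local files first, then NIS
passwd:  files nis
group:   files nis
shadow:  files nis
hosts:   files nis dns
```

The Name Service Switch came later (via Solaris and glibc) precisely to avoid all that recompiling.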

Like Telnet and Rlogin (two sides of the same coin, so to speak), NIS was completely insecure.
As in, not encrypted in any way!
So Sun improved upon it and created NIS+ (1992), which added, among other things, the much-needed encryption in transit.

But it was still quite a complicated process (a lot of shell scripting went on to make it easier, but there were still quite a few manual steps involved, and those are always a source of problems), difficult and cumbersome to administrate, so it never really became widely used.
Only those with large farms, and large user bases, used it.
It was that bad.

I worked quite a lot with NIS, but I only worked with NIS+ for a short time, because it was absolutely horrendous to work with!
MUCH more so than NIS!
It was much better just to write a script and distribute the passwd and group files around!!
THAT was how horrible it was!
We actually “downgraded” back to NIS; we accepted the risk of missing encryption rather than the massive overhead of the added administration.

Next steps

All the old ways of storing usernames and passwords (and groups) – the passwd and group files (and later the shadow files, where the password was moved to a separate file to improve security) – were row-based.
As in, one row per entry in each of the files (as well as in NIS and NIS+).
So to add a user, you (the admin) had to edit several files, one row at a time – one row, one record.
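For illustration (the user and group are made up), one record per row, colon-separated:

```
# /etc/passwd -- one row per user:
# name:password:UID:GID:GECOS:home:shell
turbo:x:1000:1000:Turbo Fredriksson:/home/turbo:/bin/bash

# /etc/group -- one row per group:
# name:password:GID:members
staff:x:50:turbo,alex
```

The `x` in the password field means the actual (hashed) password lives in the corresponding shadow file.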

However, when the X.500 protocol (1988) was created, it provided a way to organise information in a better way.
It was an object-oriented, tree-structured, directory database.
From that grew LDAP (1993) - Lightweight Directory Access Protocol - which was a lightweight version of X.500.

Neither X.500 nor LDAP was ever meant to be used for authentication or authorization, but at some point someone realised they would be a perfect fit for it.
When the OpenLDAP open-source project (1998) was created, it offered this from the start.
It was designed almost exclusively to be used for a distributed authentication (not authorization at first!) system to replace NIS/NIS+.

Object oriented and distributed

With a better system to keep users – their passwords, their user and group IDs (uid and gid), home directory, phone number(s), shoe size, and whatever else you could think of – organised into one object each, it became much easier to distribute the data to other servers and clients.
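As a sketch (the DN and all the values are hypothetical), the same user as a single LDAP object might look like this in LDIF:

```
dn: uid=turbo,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: turbo
cn: Turbo Fredriksson
sn: Fredriksson
uidNumber: 1000
gidNumber: 1000
homeDirectory: /home/turbo
loginShell: /bin/bash
telephoneNumber: +46 70 000 00 00
```

One object, one user – everything that used to be scattered over several row-based files in one place.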

With the built-in synchronization protocol (syncrepl), this could be done securely and efficiently.
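A minimal consumer-side sketch of that synchronization, in slapd.conf style (server name, bind DN, and credentials are all made up, and a real setup would also use TLS):

```
# On the replica ("consumer"): pull and follow changes from the provider
syncrepl rid=001
    provider=ldap://ldap1.example.com
    type=refreshAndPersist
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
```

With `refreshAndPersist`, the consumer keeps a connection open and receives changes as they happen, rather than polling.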

But the security part still left a bit to be desired.
The data on the disk wasn't encrypted, and the communication between the login client (telnetd, sshd, or whatever else was used) and the directory wasn't necessarily encrypted either.
The only encryption was whatever the protocol used to connect to the server(s) provided – remember, Telnet was never encrypted, but SSH was!

Safe and secure

Kerberos (MIT, late 1980s) was designed and created with the sole purpose of providing secure authentication on insecure networks.
Which is, in practice today, everywhere.

It did that in several ways, which I'm not going to go into here – there are several really nice books on the subject.
My favourite is “Kerberos: A Network Authentication System” by Brian Tung.
But I can also highly recommend the O'Reilly books on the subject – “Kerberos: The Definitive Guide” by Jason Garman and “LDAP System Administration” by Gerald Carter.

All three are books that I have read from cover to cover more than once!
And that I still, twenty years (or so) later, take down from the bookshelf and read!
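For a taste of what pointing a client at a realm looks like, a minimal /etc/krb5.conf sketch (the realm and host names are made up):

```
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc1.example.com
        admin_server = kdc1.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```

With that in place, `kinit` gets you a ticket and `klist` shows what you hold – the password never travels over the wire in the clear.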

The perfect combo?

Combining LDAP and Kerberos (my favourite implementations are OpenLDAP and MIT Kerberos, but there are other sources, both closed and open source) is basically what Microsoft calls AD – Active Directory.
It's the core of that service.
There's more to AD now; it also includes DNS – Domain Name System – among other things, but that was how it started: secure authentication and good authorization, as well as good and simple administration.

Where to go from there

When a stable, secure, distributed, and easily manageable system is in place, where do you go from there?
Well, the first step is obviously to make your login system talk to the LDAP/Kerberos system.

In today's Linux (and modern UNIX and UNIX-like systems, such as the BSDs) world, we use PAM – Pluggable Authentication Modules (late 1995, also by Sun!) – which makes it easy and modular (the clue is in the name) to set up how authentication and authorization should be done.
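As a sketch of that modularity (module options and file layout vary per distribution, so treat this as illustrative only), a PAM stack that tries Kerberos first and falls back to local passwords could look like:

```
# /etc/pam.d/common-auth -- hypothetical, illustrative stacking
auth  sufficient  pam_krb5.so
auth  required    pam_unix.so try_first_pass
```

Swapping the authentication backend means editing a couple of lines like these – no recompiling of the login daemons, which is exactly the point of PAM.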

Remote Home Directory

Just having a username and password on a remote server isn't enough.
In the UNIX/Linux world (as well as Windows and MacOS!), having the same home directory everywhere is important.
As are shared directories, team- and/or company-wide.

For a remote (as in, not local to the machine) home directory, NFS (1984) and later NFSv4 (2003) – Network File System – has always (well, since the mid/late eighties anyway :) been the go-to network file system in the UNIX world.

NFS was never encrypted, which is one of the reasons NFSv4 was created.
NFSv4 also added stronger authentication, namely via Kerberos (and with it better authorization).
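A server-side sketch of a Kerberos-protected NFSv4 export (the path and host pattern are made up):

```
# /etc/exports -- sec=krb5p = Kerberos auth + integrity + privacy
/srv/home  *.example.com(rw,sec=krb5p,root_squash)
```

A client would then mount it with something like `mount -t nfs4 -o sec=krb5p nfs1.example.com:/srv/home /home`.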

However, both of them have one very big flaw.
They're not distributed!
They're located on one server, so if that server crashes for whatever reason, there's hell to pay :(
Although clever admins usually built several smaller file servers and distributed the users over them, to minimise the loss and inconvenience for data and users if one server crashed or died.

System integration

Many companies, large and small, use Linux on the server(s) with Windows clients.
It's not just about price, although a Linux server running open-source software is cheaper, even in the long run.
Users need Windows software – some software is only available for Windows.
AND Windows is the go-to operating system for the client.
With a lot of MacOS thrown in, and a small sprinkle of Linux clients :).

But the network filesystem on Windows is the SMB protocol (IBM, 1983).
In UNIX/Linux, we can provide that with Samba (late 1993).
However, it never really worked (well, or properly) within a Windows Domain – (note: oversimplification!) the predecessor of AD.

However, Samba was mostly rewritten, almost from scratch, in Samba v4 (development started in 2003, first tech preview in early 2006, official release in late 2012 – which indicates how huge the rewrite was: almost ten years!).
It provided its own Windows Domain Controller.
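A minimal smb.conf sketch for serving home directories to Windows clients as a domain member (the workgroup and realm names are made up, and a real configuration needs considerably more):

```
# /etc/samba/smb.conf -- hypothetical domain-member file server
[global]
    workgroup = EXAMPLE
    realm = EXAMPLE.COM
    security = ads

[homes]
    browseable = no
    read only = no
```

The `[homes]` special section automatically maps each authenticated user to their own home directory.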

The same limitation as Windows' original SMB, as well as NFS and NFSv4, still remains, though.
It is not distributed.

Distributed Home Directory

But there is a network filesystem that is not talked about much, and I have met very few people who have even heard of it, which is a shame, because it's quite brilliant :).
In my humble opinion, anyway.

And that is AFS – the Andrew File System (Carnegie Mellon University in collaboration with IBM, 1986).
It is a distributed, secure (it uses Kerberos for authentication and authorization), and cached-on-the-client (so quicker) network filesystem.

Distributed in this case means, as with LDAP/Kerberos, that “the data” exists in multiple copies, on multiple servers across the network – whether that is a local network or a WAN (Wide Area Network) – and AFS distributes the file data across those servers.

You can decide what, how, and where things should go – extra-sensitive data is replicated onto multiple servers, while less sensitive data goes onto fewer servers.
Or older ones.
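In OpenAFS terms (the volume and server names are hypothetical), that replication is managed per volume, roughly like this:

```
# Create a volume on one file server, add a read-only site on a second,
# then push the current contents out to the replicas
vos create  afs1.example.com /vicepa home.turbo
vos addsite afs2.example.com /vicepa home.turbo
vos release home.turbo
```

Clients transparently fall back to a replica if the primary server is unavailable – which is the whole point of the exercise.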

AFS clients are available for all operating systems, so there's little excuse not to have a fully distributed environment.
Classic NFS has the problem that it traditionally used UDP for the connection.
This means it can handle “a little” downtime or lag in the network without much problem, but if the network gets completely disconnected (or the file server gets rebooted!), the home directory (which is the most common use for NFS) will “hang”, and often the only way out is to reboot the client.

AFS, on the other hand, handles disconnections much more gracefully.
Everything is cached locally (so you need reasonable storage) and synced up to the server(s) regularly.
This is also a disadvantage: if the client dies (as in, a hard drive crash that can't be recovered) before a sync, you'll lose the not-yet-synced data.

But this is true anywhere, even NFS.
If the file server [drive(s)] crashes before a backup, then the data goes bye-bye.

There are other distributed network filesystems around; one big, notable one is GlusterFS (2005).
I've only worked with it on the periphery, and didn't like it for various reasons – one of the bigger ones being that it tries to solve too many non-problems.
In my opinion (!!).

Final words

Of all the technologies that I've mentioned in this article, almost all were (originally) created in the mid/late eighties!
That's forty years ago!
Some of them have lived on and are still thriving (such as LDAP and Kerberos in Active Directory).
Some of them have gone to the source repo in the sky (or wherever old software goes when it dies).

So, using old tech isn't a step backward, it can be several steps forward!
It's just a matter of finding a new, different way to use them.

An article of mine wouldn't be fully complete if I didn't hint at my own book, “Implementing LDAPv3: OpenLDAP, Kerberos V and glue code for distributed data” (available on Amazon :), where I basically talk about all of this, and more.
And how to get it all compiled, installed, configured, and distributed.









About the Author


Turbo Fredriksson - Debian Developer


Turbo Fredriksson

Turbo Fredriksson



I have 35+ years of experience working in IT, starting out as a developer and then becoming both a system and a network administrator.
Later I worked as an architect, designing and managing large-scale compute clusters and networks with thousands of nodes and computers and tens of thousands of cores.
For the last 8 years I've been working with both the private cloud (VMware and OpenStack) and the public cloud (mainly AWS, but also some Azure and GCP).

Debian Developer Jan 1997 - Present · 28 yrs

ZFS On Linux Developer


https://github.com/FransUrbo


See you in the next one!


If you wish to support our project

Donation link (Buy me a coffee):

https://buymeacoffee.com/Alex_Cyber_Synapse