Configuration, Asset, Patch, and Package Management

In the real world, you seldom if ever set up a server (or workstation) in isolation.  Inevitably, there are many services involved and these are usually hosted on different servers: a web server farm, DHCP, printing, LDAP (single sign-on), DNS, NFS, Samba, NIS, etc.

In this situation, there can be many dependencies between servers, so that changing the configuration of any one of them may impact others.  Here are a few ways these dependencies can hurt the unwary:

·       Decommissioning a server that, unbeknownst to you, was the DHCP server for some small group within your organization; the affected hosts will fail to come up on the network the next time they reboot.

·       Can the new versions of applications and patches be installed on one machine from another?  What if one host has higher security requirements than the host the software is coming from?

·       How can you be certain every host in a cluster/grid is running the required (new) version of some library?

·       How can you be certain all hosts have the current anti-virus signatures installed?
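Checks like the last two can be scripted.  Here is a minimal sketch in Python; the hostnames and version data are hypothetical (in practice the survey data might come from something like ssh HOST rpm -q --qf '%{VERSION}' openssl run against every host):

```python
# Compare the library version each host reports against the required
# version.  The survey results below are hypothetical stand-ins for
# data collected over ssh or by an agent.

REQUIRED = (1, 0, 2)

reported = {
    "web01": "1.0.2",
    "web02": "1.0.1",
    "db01":  "0.9.8",
}

def parse(version):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

out_of_date = sorted(h for h, v in reported.items() if parse(v) < REQUIRED)
print(out_of_date)      # hosts that still need the upgrade
```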

These problems can become especially acute in a de-centralized environment, where some “uppity” SA overrides the standard configuration with local changes.

Every organization has information stored about its IT infrastructure, and this information must be kept up-to-date as changes are made to any configuration item (or CI), and consulted for dependency information when planning any changes.

Every industry has a slightly different definition of configuration management.  The ISO definition (ISO 10007:2003, Quality management systems - Guidelines for configuration management) is typically obscure: “Configuration management is a management activity that applies technical and administrative direction over the life cycle of a product, its configuration items, and related product configuration information.”  What that really means is keeping track of all the tasks and settings needed to bring a “bare metal” computer with just a running, default OS, to a fully operational state.

Configuration management (“CM”) is the task of juggling the configuration of all servers, routers, switches, firewalls, PCs, etc., and all application configurations on them.

The SA must check that changes to any configuration information (any “CIs”) have been recorded correctly, including any dependencies that may have changed, and continuously monitor the status of all IT components.  CM is sometimes (and I think incorrectly) called asset management or other names.

CM starts with policy.  If a system doesn’t behave as it should (according to policy), then you have a problem that needs to be detected, understood, and fixed.

[Wikipedia on ERP, 3/09]  Enterprise resource planning (ERP) is an enterprise-wide information system designed to coordinate all the resources, information, and activities needed to complete business processes.  An ERP system is based on a common database and a modular software design.  The common database can allow every department of a business to store and retrieve information in real-time.  The information should be reliable, accessible, and easily shared.  The modular software design should mean a business could select the modules they need, mix and match modules from different vendors, or add new modules of their own.

Using standard ERP modules that implement “best practices” for CM can reduce risk and ease compliance with regulations such as IFRS (International Financial Reporting Standards), Sarbanes-Oxley and Basel II.

CM includes patching systems and applications, throughout the enterprise.  In larger organizations, this requires some help (including policies, tools, and servers) or the task is impossible.  One part of CM is Software CM or SCM, which typically involves a revision (or version) control system such as RCS or CVS (discussed later).  CM is related to change management (discussed below).

A number of standards and guides for CM/SCM are available including: IEEE Std. 1042-1987 (Guide to Software Configuration Management), MIL-HDBK-61A 7 February 2001 (Configuration Management Guidance), ISO-10007 Quality management (Guidelines for Configuration Management), and others.

There are often legal requirements for doing some sort of CM, including compliance with various regulations (SOX, HIPAA, etc.).  The penalties for management may be severe; as the SA, you may find yourself out of a job.

The phrase configuration management is sometimes used to refer to the hardware selections made when upgrading or purchasing new systems.  This is also called provisioning.  An SA must keep an inventory of all hardware, including where it is, make/model, when and where purchased, serial numbers, and support contract information.  Consider what happened Fall 2008 in the DM lab, when the wrong hardware was purchased and CTS-2301C students couldn’t use their disk drives in the lab computers!

The basic choices to be made when provisioning a new server include: the type of enclosure (rack mount or enclosure, and the size and type), the power and cooling requirements, noise and vibration damping, the type of motherboard and bus, the number of CPUs (or cores), the amount of RAM, drives (disks, CDs, DVDs, if any), and the I/O (console ports, printer ports, network) needed.

Even using just a single vendor, the possible configurations of a single server are staggering, often more than 10,000.  And not every combination will work!  Most vendors have (and some provide to customers) a configuration or provisioning tool they use, to make a workable configuration that will meet your needs.  Such tools need frequent updating!

When planning or upgrading a SOHO (small office/home office) or other small-to-medium organization, money is often a critical factor.  While new hires rarely have to design large business infrastructures, it is not uncommon for new hires at SOHOs to have to “fix” a faulty infrastructure, or grow a SOHO into a mid-size one.  There are some general guidelines you can follow:

·       Use commodity hardware for client-side systems.  Keep spare parts handy.  Keep data (such as users’ home directories) on a server.  This will keep recovery time as short as possible in the event of a PC failure.  Client PC failures rarely cause highly visible incidents (the exception being the CEO’s desktop).  Maintain an up-to-date disk image file, in case you need to install a new PC quickly.

·       Running cables is expensive, but installers generally charge per drop and not per foot.  So it is often cheap to pull double cables, which has several potential advantages: if one fails you have a spare; if you need more bandwidth, you can use bonding; and if another computer is installed in that office, you already have the outlet for it.

·       In a medium to large organization, you may have several servers, firewalls, network monitors, and routers.  If the network fails you lose everything, so at least two “core” switches are used for redundancy, even though these are very expensive.  If available, these switches should have redundant power supplies (or darn good UPSes).  Core switches require high speed so use “L3” (or “multi-layer”) switching.  This means they are working as (a pair of redundant) routers.  They should use the Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP).

These two protocols cause routers and core switches to share a virtual IP that your host systems use as their default gateway.  When the primary one goes down, the other very quickly begins responding to the virtual gateway IP, ensuring that your hosts are not aware of the failure.
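On Linux, VRRP is commonly implemented by the keepalived daemon.  A minimal sketch of one side of such a pair follows; the interface name, router ID, priority, and virtual address are all hypothetical and must match your network:

```
vrrp_instance VI_1 {
    state MASTER            # the backup router uses "state BACKUP"
    interface eth0
    virtual_router_id 51    # must match on both routers
    priority 150            # higher priority wins the election
    advert_int 1            # advertisement interval, in seconds
    virtual_ipaddress {
        192.168.1.1         # the shared default-gateway address
    }
}
```

The backup router carries a nearly identical stanza with a lower priority; when advertisements from the master stop, it takes over the virtual address within a few seconds.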

·       Depending on the size of the organization you may only need a single “access” switch to connect all of the client PCs to the core switches (or to the one and only router).  More commonly, you’ll need a few if the original one had only 4 or 8 ports and you need more, or if the organization is spread out over an area too large to cover with a single data link (LAN).  A 1U fixed-configuration switch is good enough for an access switch, as long as it contains at least two ports of the correct speed for uplinks to each of the core switches.  While these don’t need redundant power supplies like the core switches, you will need an “enterprise grade” switch with management features such as VLANs and a SPAN (port-mirroring) port.

·       If your organization is large enough to host its own services (database, web, mail, ...), then redundancy is going to be important.  You should use a cluster or a simple hardware load balancer.  This is probably too much for a SOHO; a decent hardware load balancer will cost between $4K and $10K, the same as a high-quality server.  Another solution is to outsource backup servers for vital services.  For example, you can pay a small monthly fee to some datacenter and they will be your primary email server, forwarding email to your internal mail server.  They have high-uptime SLAs, and usually offer spam and virus filtering.  (This is called mail bagging.)

The most basic CM method is to keep a system journal of all your steps, so you can reproduce them.  A step-up from that is to keep your notes on a wiki.  A major improvement is to automate some or all of the manual steps.  That can be done by finding simple tools (including GUI ones) that do most of the work for you.  Usually however, even the simplest IT infrastructure will need custom steps, and no existing tool can automate all your processes.  Instead, scripts (shell, Perl, Python, Ruby, or whatever) are used to run a series of non-GUI tools to complete processes such as adding a new customer, employee, or deploying a new web (or other) server.
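A sketch of such a script, in Python, for an “add a new employee” process; the commands and options are illustrative stand-ins for your site’s actual tools:

```python
# Automate a multi-step manual process by driving non-GUI tools.
# The commands below (useradd, passwd, mkdir) and their options are
# illustrative; substitute your site's actual tools and paths.

import subprocess

def onboard(username, fullname, dry_run=True):
    steps = [
        ["useradd", "-m", "-c", fullname, username],
        ["passwd", "-e", username],                 # force a password change
        ["mkdir", "-p", "/var/spool/mail/" + username],
    ]
    performed = []
    for cmd in steps:
        performed.append(" ".join(cmd))
        if dry_run:
            print(" ".join(cmd))                    # show what would run
        else:
            subprocess.check_call(cmd)              # actually run it
    return performed

onboard("jsmith", "Jane Smith")
```

The dry-run mode shown is a useful habit: the script prints the exact commands it would run, so they can be reviewed before being executed for real.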

The above discussion didn’t mention monitoring.  You don’t do CM once then forget it.  You bring systems in line with your policy, audit the systems to make sure, and monitor them over time, correcting any problems that inevitably crop up.

Most SAs use some collection of tools to manage some part of system configuration: webmin, command-line tools, central software repositories, etc.  However, there is usually no unified approach taken to configuring a network of hosts.  Starting with:

·       a collection of different hosts,

·       a repository of all needed software packages, OS versions, and data files,

·       a specification of the functions the system as a whole is intended to perform.

The system configuration tasks are:

·       Initialize the hosts by loading the correct OS, software, and data, and then configuring the OS and software appropriately.  (This is sometimes called the bootstrapping service.)

·       Reconfigure hosts whenever the system specification changes.

·       Reconfigure hosts to maintain conformance with the specification, whenever the environment changes (e.g., when some server breaks down).

The most advanced CM methods use a special tool that lets the SA define the policy in a declarative, system-agnostic way, then automatically apply that policy to specified hosts.  The SA defines the system specification, the location of the hosts, repositories, etc., and lets the tool do the rest.  Those hosts are monitored, and any problems (say, the Apache web server is supposed to be running but isn’t) are corrected by the tool automatically.  Such tools include Cfengine (the oldest), AutomateIt, Bcfg2, Puppet, and Chef.  (There are many other such tools, but these are the most commonly used, and are well supported and maintained.)

Using such a tool, you can bring up a new host from bare metal easily, reliably, and quickly.  For example, you can specify “package: apache; action: running; firewall-rules: allow TCP/80” as part of a web server policy.  The tool will use apt-get, pkg, yum, or whatever the OS needs, to make that happen.  (At worst, the policy will also need a distro-specific package name.)  Should the web service be turned off (say, because some required library file was upgraded to an incompatible version), the tool will reinstall the service if possible, automatically.  If the tool doesn’t know how to perform some task on some distro, you would probably have to spell that out in the configuration file(s).  But that is no more work than if you had to write a shell script for the task.
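For example, in Puppet’s declarative DSL the web-server policy sketched above might look roughly like this (the package/service name apache2 assumes a Debian-style distro, and the firewall rule would need an add-on module, so it is omitted here):

```
package { 'apache2':
  ensure => installed,
}

service { 'apache2':
  ensure  => running,          # restart it if it is found stopped
  enable  => true,             # start it at boot
  require => Package['apache2'],
}
```

On each run, the tool compares this desired state against the actual state of the host and performs only the actions needed to close the gap.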

If you have a backup of the CM tool policy file(s) and a storm destroys your company’s data center, you can sign up for Amazon cloud service, upload your configuration, then restore the customer and other databases, and finally change your DNS to point to the new cloud servers.  The whole process might take hours instead of weeks.

Other tools are commonly used, but only help with some of the tasks.  These include databases (referred to as the configuration management database, or CMDB), which should not be confused with asset management.  (Qu: What kind of queries do you think would be useful?)  Other tools include a repository of software and patches (sometimes called a software depot service), revision control systems, and a variety of shell scripts and open source and commercial tools used to monitor the system and apply updates.

One popular FLOSS tool is xCAT (Extreme Cloud Administration Toolkit), originally from IBM before it was put into open source.  It can be used to manage and provision thousands of hosts in a data center.

Other issues to consider are that updates and reconfigurations are made on a production network, so [Limoncelli & Hogan]:

·       It is not okay to flood the network with updates all at once;

·       No software update should require physical access to the host;

·       No update should crash the host, as it may have live users;

·       Hosts may not have the previous configuration assumed (the user or some SA may have changed it), so any update must check carefully;

·       Remember, dual-booted PCs may be currently running a different OS than the update is for.

SAP is an acronym for “Systems, Applications & Products (in Data Processing)”.  It is also a commonly used brand of CM/ERP (Enterprise Resource Planning) software, which creates a common centralized database for all the applications running in an organization.  Today, major companies including Microsoft and IBM use SAP to run their own businesses.  SAP’s suite of applications provides functionality used to manage product/service operations, cost accounting, assets, materials, and personnel.  (There is some good FLOSS ERP software from OpenERP.  Tine also supports some ERP; it is a groupware application suite that supports CRM and HR ERP modules.)

Configuration Management for Mobile Devices

Mobile Device Management (MDM) is another form of CM.  Three Laws of Mobility (3LM) for Android, and BoxTone (for everything else), are two examples.  In March of 2012, Apple released its CM tool for iOS devices, Configurator.  3LM provides encryption of all data on the device (including SD cards), password policy enforcement via LDAP or AD, application whitelisting and blacklisting, and full-device or selective remote wiping of data if the device is lost.  Applications can be managed remotely and pushed to or remotely deleted from the phone.  Administrators can lock required applications down to prevent the user from removing them.  The system can also be used to “breadcrumb” the phone, tracking its movements to locate it if it is lost (or to check whether the user is really out on that sales call).  3LM also provides built-in VPN access to corporate data and applications; the activity of each handset on the network can be tracked by its unique IP address on the VPN.

SaaS (Software as a Service) and Web Services

Both consumers and businesses today depend on third-party services such as Facebook, Twitter, Gmail, and Amazon.  In the corporate world, there are also custom line-of-business applications and software-as-a-service (SaaS) applications, also known as cloud applications, such as Google Apps and Office 365.  Such services typically roll out updates on a rapid release schedule, with a few weeks, days, or even hours between flash-cut roll-outs of new versions, an approach that some consumers favor.  However, this approach means businesses don’t have the traditional options of testing and rolling out the change on their own schedule.  There is often no notice given, and some of these services (such as Facebook) don’t even have version numbers.  Besides the obvious problems of security and failure, help desk staff who were trained on the old version won’t know about changes.

Companies want to use SaaS providers for the reduced administrative burden and convenience.  However, this exposes them to less control over the software they use, and to an update policy they no longer control at all.  Organizations want to use Web browsers as a portal to access such applications, but Web browsers, with their considerable consumer focus, have increasingly consumer-oriented upgrade policies.

The approaches taken to upgrades and feature roll-outs by SaaS providers do vary, both in terms of notice and flexibility.  Zoho for example offers a variety of Web-delivered applications including CRM (Customer Relations Management), desktop productivity, bug-tracking, mail, and wikis, to more than five million users.  They provide advance notice prior to any substantial updates, and early access to those updates so that its subscribers can test and validate them.  During this period, subscribers can switch between the old and new versions.

Google has two tracks for Google Apps: a Rapid Release track and a Scheduled Release track.  On the Rapid Release track, features are available as soon as they pass Google’s quality assurance processes.  On the Scheduled Release track, new features are announced and delivered only on Tuesdays.  Each Tuesday, the company decides if any Rapid Release features are ready for the Scheduled track, and if so, it announces them.  The following Tuesday they are then rolled out, so at least one week passes between the Rapid and Scheduled releases, and potentially more if problems are found after the Rapid release.

Google Apps customers can switch tracks at any time, but it’s an all or nothing proposition; if a small group of users wishes to test the Rapid Release track to get an early look at a scheduled feature, they must do so using a different account.  This two-track approach only covers a handful of Google’s services: Mail, Calendar, Contacts, Docs, and Sites.  Everything else is rapid release only.

Microsoft has yet another approach.  The company states that “major service updates are rolled out to the service as they become available”, with no option to delay or defer the upgrade.  However, administrators can control the availability of new functionality through the administrative dashboard, which allows some ability to test updates and “prepare their users appropriately” before rolling out new features.

When your organization depends on some external service, you can only support what they will support.  For example, Google supports only the two most recent versions of any browser.  So when Internet Explorer 10 is released in 2012, one side-effect is that any company still using Windows XP will no longer be able to use Internet Explorer to access Google Apps; neither Internet Explorer 9 nor 10 will run on Windows XP.

Mozilla has recognized that its new rapid release policy has not been welcomed with open arms by IT departments.  Past attempts by Mozilla to undertake corporate outreach have not been particularly successful; the original Enterprise Working Group had three meetings in 2007 before apparently giving up.  Extended Support Releases are available as of 2012.

So what can an IT department do?  One option is to abandon the browser as a platform for internal applications.  While this runs contrary to the trends of the last decade, for organizations with strict testing and validation requirements, it must be an option.  Operating systems and application frameworks such as Microsoft’s Silverlight and Adobe’s AIR are in many ways much more stable targets for applications, and just as useful.  But it will take time and money to create and switch to such applications.

The simplest option is to avoid the issue entirely: freeze your software versions and retain full control, by avoiding SaaS services with inappropriate update policies.  A more robust long-term solution is to have your software not target any particular browser version, but instead target the relevant Web standards, and test your software with a range of browsers.  Organizations may increasingly turn to non-browser software that uses Silverlight or AIR or something new.

Finally, there are software products that allow you to run multiple versions of web browsers, and to redirect various URLs to one or the other.  Microsoft’s MED-V does this by using a virtual WinXP machine to host IE8.  Browsium’s Unibrows software does this without virtualization.  Instead, Unibrows creates a compatibility environment that allows the legacy browsers to run directly within Windows 7.

Asset Management

This is the work done in tracking all hardware, including servers, routers, PCs, laptops, tablets, smart phones, etc.  This information is kept in a database, sometimes just a piece of paper or a spreadsheet.  However, keeping the data in an RDBMS has many advantages.  You can just use SQL to quickly answer questions about all equipment of a given vendor, all routers whose support contracts expire within 3 months, all switches connected to the same LAN, and so on.
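A sketch of that idea using SQLite from Python; the schema, hostnames, and dates are illustrative, and the “3 months from today” cutoff is hard-coded to keep the example self-contained:

```python
# Keep the asset inventory in an RDBMS (SQLite here) and answer
# "which support contracts expire soon?" with a single SQL query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE asset "
           "(hostname TEXT, type TEXT, vendor TEXT, warranty_end TEXT)")
db.executemany("INSERT INTO asset VALUES (?, ?, ?, ?)", [
    ("web01", "server", "Dell",  "2025-03-01"),
    ("sw01",  "switch", "Cisco", "2030-01-01"),
])

# ISO-format dates stored as text compare correctly with <=.
cutoff = "2025-04-15"            # "today + 3 months", fixed for the example
expiring = db.execute("SELECT hostname FROM asset "
                      "WHERE warranty_end <= ? ORDER BY hostname",
                      (cutoff,)).fetchall()
print(expiring)                  # assets whose warranty expires by the cutoff
```

The same table supports the other queries mentioned above (by vendor, by LAN, and so on) just by changing the WHERE clause.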

Periodically (usually each year) an asset audit is performed, walking through every location to make sure all equipment is where it should be.  (HCC does this, requiring faculty to bring in any portable equipment and leave it in their offices overnight.  Accompanied by a high-ranking administrator, a sys admin scans every bit of equipment with a portable bar code scanner.)

For each IT asset you need to track, you need to keep lots of information, including these (from a list found online):

Description, hostname, department or person assigned to, type (router, switch, server, ...), manufacturer, model, status (in use, in warehouse), serial # (assigned by vendor), asset tag # (assigned locally), location (including rack/shelf position), IP addresses, switch port connected to, monitored or not (say, with Nagios), OS/firmware detail (including version), warranty start and end date, type of warranty (on-site support, 24-hour support, etc.), warranty service contact info, date of purchase/lease (and lease expiration), price and terms (monthly payment, buy-out/trade-in), and notes (e.g., web server URL, purpose of equipment, ...; the purpose can be related to Nagios type values).

Patch Management (adapted from an ACM Queue article from March 2005)

Patches are applied to applications and to the OS, usually to address flaws but sometimes to add features or to support newer hardware.  New features usually require installing additional packages.  While GNU software (and most Linux systems) has no issue with releasing a new version of a package, a more stable OS such as Solaris changes package versions only between OS releases.  All changes to existing code are delivered by patches only.

These can be critical to apply, e.g., security patches.  Software vendors have a responsibility to produce such critical patches in a timely manner.  Sometimes a flaw just discovered has existed for some time, and it may be necessary to have different patches for each supported version of the software.  Patches are sometimes called updates or upgrades.

Ideally applying a patch will not affect any system users.  Even simple code flaws may not be easy to fix, however.  The WebDAV issue fixed in Microsoft Security Bulletin MS03-007 was an example.  While the exploit happened in WebDAV, the actual problem occurred in a kernel function used by more than 6,000 other components in the operating system.  A simple code flaw is no longer easy to fix when you have 6,000 callers.  Some of those callers may actually be relying on what you have now determined to be flawed behavior and may have problems if you change it.  (This sometimes leads to patches for the patches!)

It is important for the vendor to ensure that one patch doesn’t undo the good work of some prior patch.  Also, some patches may depend on earlier ones.  (To avoid that problem, MS issues cumulative patches, so if you use the latest one you get all prior ones too.)

Patches may be:

Incremental, in which a patch depends on all previous patches (that is, the patch to version 5.21 must be applied to version 5.20),

Cumulative, in which a patch has no dependencies (that is, the patch for version 5.21 contains the previous 20 patches too; you can install this one patch on any 5.x version to get version 5.21), or

Differential, in which patches are applied to the original version (that is, the patch for version 5.21 must be applied to version 5.0, not 5.20).

Cumulative patches can get very large over time.  Every so often, Microsoft issues a service pack, which is just a normal cumulative patch.  However, future patches are cumulative only back to that service pack release.  The Linux kernel issues both incremental and differential patches.
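The three styles can be captured in a toy model: given the installed minor version and a patch that produces minor version target, can the patch be applied?  (The version numbers are just the 5.x examples from above, with 0 standing for the original 5.0 release.)

```python
# Toy model of incremental, cumulative, and differential patches.

def applies(style, installed, target):
    """Can a patch producing version `target` be applied to `installed`?"""
    if style == "incremental":    # needs the immediately previous level
        return installed == target - 1
    if style == "cumulative":     # applies on top of any earlier level
        return installed < target
    if style == "differential":   # applies only to the original release
        return installed == 0
    raise ValueError(style)

print(applies("incremental", 20, 21))    # True:  5.20 -> 5.21
print(applies("incremental", 19, 21))    # False: must go through 5.20 first
print(applies("cumulative",   5, 21))    # True:  one patch is enough
print(applies("differential", 20, 21))   # False: must start from 5.0
```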

Patches must be easily deployable.  In some cases, a patch is to the source code of some application or operating system.  This is often the case with open source software such as the Linux kernel.  Such a patch is actually a file containing the output of the diff command, and can be used to update the old version of the source code to the new version using the patch command.  Then the code must be re-compiled and re-installed.  This can be quite complex to do!
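To see what such a patch file actually contains, note that Python’s difflib can produce the same unified-diff format that the patch command consumes (the file names and the one-line change are made up for the example):

```python
# A source patch is just diff output.  Generate a unified diff of a
# hypothetical one-line change to a C source file.
import difflib

old = ["int limit = 10;\n"]
new = ["int limit = 64;\n"]

diff = "".join(difflib.unified_diff(old, new,
                                    fromfile="a/conf.c",
                                    tofile="b/conf.c"))
print(diff)
```

Saved to a file, output in this format could be applied with something like patch -p1 < fix.diff, after which the code must be rebuilt and reinstalled.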

For proprietary software the source code is not available, and the only choices are to ship the patch as a binary diff that modifies the .exe and other files to the new version, or to ship a completely updated version of the software, which must then be installed.  (rdiff is useful to create binary diffs; on Red Hat it is in the librsync package.)

Most systems include some sort of patch management system, or software package system, or both.  Examples of patch management systems include MS Windows Update, Apple Software Update Service, and Solaris Sun Connection.  Examples of package systems include Windows Installer, and Debian, Solaris, Slackware, and Red Hat package management systems.

Most package and patch management systems can run pre- and post- install scripts with the package/patch.  Small configuration changes may be made by using the patch management system.  A package/patch is created that just runs some scripts without actually installing anything else.  When the patch is deployed, the scripts run and can change configuration files.

Patches should use a naming scheme that identifies the software being patched (including its version number), the software’s new revision number (once the patch is successfully applied), the architecture (e.g., PC, IA-64, etc.), and the locale (e.g., English or Japanese version).  For Solaris, patches are identified by a patch identification number (patch ID).  This patch ID consists of a six-digit base identifier and a two-digit revision number, of the form xxxxxx-yy.
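Splitting such an ID into its parts is trivial to script (the example patch ID is illustrative):

```python
# Split a Solaris patch ID of the form xxxxxx-yy into its base
# identifier and revision number.
def split_patch_id(patch_id):
    base, rev = patch_id.split("-")
    return base, int(rev)

print(split_patch_id("118833-36"))
```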

Patches should be single files that contain a description of the problem being addressed by the patch, details of the software and version the patch is for, what prior patches if any this patch depends on, and any other instructions or information users of the software may need (e.g., “You will need to reboot after installing this patch”, or “You must change your configuration after installing this patch”).

Patches ideally support an un-install, or rollback option.  Patches should also add an entry to a log file for patches.

Patches must be protected from tampering, and support verification.  This is typically done with digital signatures, or at least with a checksum/digest/hash.  Patches can be made available with a secure web server, or some other TLS-protected service.
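A sketch of the digest check, using SHA-256 from Python’s standard library; the “published” value here is computed from the sample data just to keep the example self-contained, whereas in practice it would come from the vendor over a trusted channel:

```python
# Verify a downloaded patch against a published digest before
# installing it.
import hashlib

def verify(data, expected_hex):
    """True if the SHA-256 digest of `data` matches the published value."""
    return hashlib.sha256(data).hexdigest() == expected_hex

blob = b"patch contents"
published = hashlib.sha256(blob).hexdigest()   # stand-in for the vendor's value

print(verify(blob, published))          # True  -- download is intact
print(verify(b"tampered!", published))  # False -- reject the download
```

A digest only detects corruption or tampering of the file itself; a digital signature additionally proves who produced the patch.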

Some patches may not come from the vendor.  This can happen with an enhancement patch, or if the vendor is slow in supporting its customers.  You must be very careful with such third-party patches!  Often installing one voids your support contract with the vendor.  At the very least it can adversely affect future patches from the vendor.

Patches must be deployed, and customers informed when new patches are available.  This can be done automatically but that is rarely a good option for servers.  Often an email notice is sent (today RSS can also be used).

Most vendors release patches on a regular schedule.  Doing so allows sys admins to schedule maintenance windows.  For example, Microsoft releases patches on the second Tuesday of every month (“Patch Tuesday”).  Cisco releases patches twice a year.  (Of course, if there is an emergency such as a security vulnerability known to be exploited already, a responsible vendor won’t wait and will release a patch immediately.)

Notifications should include detailed information on the problem, affected components and their versions, instructions and information about installing the patch and any changes required by the patch, where to download the patch from, and possible work-arounds for the problem in case the patch can’t be applied immediately.

The notification should also describe the severity and urgency of the patch (e.g., a security patch with major implications, but with no known exploits, versus a patch with minor implications but with current exploits known).

If you have an urgent patch, it may be necessary to skip some compatibility testing and deploy it more quickly.

Once you have been notified of the availability of a patch, you need to identify all hosts that need the patch.  This includes servers, workstations, and don’t forget remote users (e.g., those who work from home or use laptops or notebook computers).  Note some servers run multiple instances of some software, so you need to remember to patch each instance.

Once you have downloaded (and verified) the patch, you next need to test it, to make sure it is compatible with your critical software.  Installing a security patch is useless if it crashes your mission-critical applications!

If installing a patch requires down-time, you may have to delay installing it on production servers until a maintenance window is available.  Installing patches on clusters and grids usually requires special planning and support.

To ensure your organization is fully protected it is important to monitor patch compliance.  You may need to decide to isolate un-patched hosts until they are patched.  Other aspects of patching should be monitored as well, for management reporting purposes: patch download and install rates, reliability, and whether or not required reboots/restarts have been done after patch installs.
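The core of such a compliance report is a simple set difference.  A sketch, with hypothetical patch names and inventory data:

```python
# Patch-compliance report: which hosts are missing any required patch?
# The required set and per-host inventories are hypothetical; in
# practice they would come from your patch repository and from
# querying each host.

required = {"KB100", "KB101"}

installed = {
    "web01": {"KB100", "KB101"},
    "web02": {"KB100"},
}

missing = {host: sorted(required - have)
           for host, have in installed.items()
           if required - have}

print(missing)        # non-compliant hosts and what they lack
```

Hosts appearing in the report are candidates for isolation until they are brought up to date.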

Patch management policies for patch deployment can be one of three types.  You will probably use a combination of these, for different types of patches/updates:

pull policy —     The SA updates the repository with the vendor-supplied, verified, and tested patch, along with instructions.  The end hosts check and apply the patches.

push policy —   The SA updates the repository with the vendor-supplied, verified, and tested patch, along with instructions, and alerts the end hosts to update themselves ASAP. (RSS can be used.)

force policy —  The SA updates the repository, and starts a remote update procedure.  For disconnected and currently off-line hosts, the boot-up/login/reconnect procedure is modified to do the update as soon as the host next boots/logs in/reconnects.

The SA must decide which patches to apply, and when.  There are four strategies for this:

proactive —      The main goal of proactive patching is to prevent unplanned downtime or, in other words, problem prevention.  These patches are applied to a working system during a scheduled maintenance window, and need to be tested first.

reactive —        Reactive patching occurs in response to an issue that is currently affecting the running system and that needs immediate relief.

security —         Security related patches are proactive and yet they need to be installed before the next scheduled maintenance window.  An SA must keep informed about the availability of all patches, but especially security related ones.  Subscribe to alerting services offered by the vendor or by security organizations.

new OS —         During an initial system install all relevant patches should be applied before the first boot (or as soon as possible thereafter).

Applying patches or replacing packages can take a long time (in some cases, hours).  If the kernel is updated, or certain other parts of the system such as the glibc or libc DLL, SE Linux policy, or modules on a sealed kernel, a reboot is generally needed.  Worse, upgrading DLLs, configuration files, and other data can cause running processes to become unstable.  It is therefore often best to upgrade an off-line system, then reboot it.  This results in long down times, which is not a problem for home users or hosts in a cluster.  Otherwise you can use Solaris Live Upgrade or equivalent virtualization techniques for non-Solaris systems; these update an off-line clone of the system, and a quick reboot onto the clone reduces down-time significantly.

Solaris Patches (this material applies to Solaris 10 and older versions only)

Unix vendors such as Sun don’t generally distribute source code for the kernel, drivers, or utilities.  So when updates are needed you either replace the software (an “update”) or patch it.  These patches are unlike Linux patches, which mainly update source files.  Solaris patches are designed to update binary files.

To see which patches have been applied, use showrev -p or uname -a.

A Solaris pre-11 patch file contains one or more (SVR4) sparse packages, delivering binaries that accommodate new bug fixes and/or new features.  Patches are for either source code (common for open source) or binary (common for proprietary code, such as a Unix kernel).

Each patch is identified by a patch identification number (patch ID).  The patch ID consists of a six-digit base identifier and a two-digit revision number of the form xxxxxx-yy.

Solaris patches are cumulative: Later revisions of the same patch base ID contain all of the functionality delivered in previous revisions.  For example, patch 123456-02 contains all the functionality of patch 123456-01 plus the new bug fixes or features that have been added in Revision 02.  Changes are described in the patch’s README file.
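Because revisions are cumulative, only the highest revision of each base ID matters when auditing a list of applied patches.  A sketch using sort and awk, with made-up patch IDs:

```shell
# A hypothetical list of applied patch IDs (base-revision form):
cat >patches.txt <<'EOF'
123456-01
123456-02
118833-36
123456-03
118833-24
EOF

# Sort by base ID, then by revision numerically in reverse, and keep
# only the first (highest-revision) line seen for each base ID.
sort -t- -k1,1 -k2,2nr patches.txt |
awk -F- '!seen[$1]++' >latest.txt
cat latest.txt
```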

Patches usually do not contain all the binaries (i.e., patches are sparse) that had been shipped with the package they update.  Patches may contain scripts prepatch and postpatch, to control installation or to update/fix configuration files.

The functionality delivered in a patch might have a code dependency on other patches.  That is, unlike MS service packs, a patch does not bundle the other patches it requires.  If a patch depends on one or more patches, the patch specifies the required patches in the SUNW_REQUIRES field in the pkginfo file of the packages in the patch.
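Given a set of SUNW_REQUIRES-style prerequisite relationships, tsort(1) can compute an installation order that satisfies every dependency.  The patch IDs and file format below are invented for illustration:

```shell
# Hypothetical dependency pairs: "prerequisite patch" per line, meaning
# the first patch must be installed before the second.
cat >deps.txt <<'EOF'
118833-36 123456-03
119254-06 118833-36
119254-06 120011-14
EOF

# tsort performs a topological sort, printing a valid install order.
tsort deps.txt >order.txt
cat order.txt
```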

All applied patch data is kept in /var/sadm/pkg/packagename/.

Patching a server with multiple zones can take many hours!  For every patchadd command, patching first occurs in the global zone, followed by each non-global zone.  This time increases linearly with the number of non-global zones running in the system, since all patching is done sequentially.  Also, patching a whole root zone takes longer than patching a sparse root zone.

Patches for kernel binaries and low-level libraries such as libc must be applied in single-user mode.  This and several other restrictions are listed in the README file of each patch.  Since this requires significant down time, you should schedule such patching during the server’s regular maintenance window.

The recommended solution to this is to use Live Upgrade, a system where you keep a mirror of the root partition.  You patch the mirror without bringing down the system, do a quick reboot onto this other root, then finish by patching the original root.  Each patch cycle you alternate between the two root partitions.

To add a Solaris patch use the “official” tools patchadd and patchrm.  Use patchadd -p to see a list of installed patches.  A GUI tool (Solaris 9+) is the Sun Management Console (smc).

Another “official” tool is smpatch analyze|download|add|update.  This tool attempts to determine automatically which patches you need, downloads them, and patchadds them (update works like add, but installs dependent patches too).

Sun has new patching policies, as Sun now charges for support.  These policies seem to change frequently.  This has caused great pain to SAs trying to keep their systems safe, as old tools and procedures (and Internet patch repositories) keep breaking.

Sun recently released a new tool, Sun Connection (the updatemanager GUI and a remote webapp), plus an updated smpatch TUI, that will hopefully make patching much simpler.  Using Sun Connection, updates installed and removed while logged into the global zone will be applied to the global zone and all applicable non-global zones.  This works for Sun-supported Linux systems too.  You can also create local patch servers (patchsvr) for your enterprise to use.

You must register to get access to security fixes and hardware drivers.  You must have a Sun Service Plan (SunSpectrum, Solaris Service Plan, or Managed Services) to access the full range of patches, upgrades, and updates available.

Using pca

The latest version of the un-official, third-party Perl script pca is currently the best (and most commonly used) way to patch automatically.  It determines your system type, reads the local patch DB to see which patches you have installed (this is not the same database as the package system’s), checks the official Sun/Oracle repositories to see which patches you need and which are available, then downloads and installs them.  Note you still need to register with Oracle/Sun to have access to patches.

To use pca with LU, try a script similar to this:


# Have pca analyze a non-root filesystem
# and download the necessary patches:
if pca --xrefown --askauth --patchdir=$PATCHDIR \
    --ignore=123590 -R $ALT_ROOT -d missing
then
    echo "$0: Unzipping patches"
    (  cd $PATCHDIR
       for patch in *.zip
       do  unzip -q $patch
           rm $patch
       done
    )
fi

# Now apply those patches:
luumount $BE_NAME
luupgrade -t -n $BE_NAME -s $PATCHDIR
[ -n "$CURRENT_ROOT" ] && lumount $BE_NAME

You can also do this with patchadd.

Solaris 10 patching performance is very bad.  It can take many hours, and you can’t patch a running system!  It is virtually impossible to apply any kind of large patch bundle to Solaris 10 while meeting a reasonable SLA.

A method of working around this problem is recommended in the Sun blueprint document for boot disk layout.  You can use Live Upgrade (LU) by having an extra copy of your boot environment (BE) disk slice(s).  The BE consists of all parts of the directory hierarchy that may be updated by any patch, install, or update.  On the boot disk this is usually everything except /export (which contains home).

Assume you used slices 1 (for root) and 3 (for /var).  You just make the new boot environment (BE) from slices 4 (alt-root) and 5 (alt-var) which initially are identical to slices 1 and 3.

Next you upgrade or patch the alternate BE, then boot from it.  The only outage is the reboot, and the back-out plan is simply another reboot from the original BE.  You can do the next upgrade or patch onto slices 1 and 3.  Meanwhile you just run from the new BE.

The disadvantages are that you have to design for it, the machine spends half its life running from “strange” slices, and you don’t have any spare slices at all (so it would not work if you wanted, say, /opt on its own slice).

Another way of patching/updating a system with mirrored (boot) disks would be to break the mirror then use one of the sub-mirrors for live upgrade while continuing to run on the other disk(s).  That works but the server has no redundancy until you have completed the upgrade, rebooted, and re-synced the mirrors.  And the moment you rebuild the mirrors you no longer have a simple/quick way back to the old system.  You would have to restore the boot disk from a backup, which means the back-out takes a long time.

For patching Windows clients, Microsoft provides Windows Server Update Services, otherwise known as WSUS.  WSUS provides a way to consolidate updates on one server, distribute them out to clients, and apply only approved updates from approved categories at approved times.  This is much better than manual updates, or automatic updates that can break SLAs.  (Note Microsoft generally releases patches on the second Tuesday of every month.)

Solaris 11 uses its (new) package system to distribute patches, rather than a separate patch facility.  Patches are available from the appropriate support repository.

Linux kernels do support live patching.  In most cases, you can patch the running system without a reboot; this requires the ksplice package to be installed.

Package Management

Software comes in three types: packages, source code, and patches.  (A fourth type would be a downloaded executable that needs no installation.)  A vital part of an SA’s job is installing, maintaining (updating), and removing system software and applications, installing patches, and configuring all software including the kernel.

It is important to install software in the right places on your system.  Qu: After you have installed all the stuff on the distribution CDs, when you install additional software beyond that, where will you put it?  Ans: Modern systems have standardized where the stuff goes; see below.

A system roadmap called the filesystem hierarchy standard (FHS) defines where everything goes.  See also hier(7) on Linux, filesystem(5) on Solaris.  (Each distro has its own conventions, but all are similar to the FHS.)

Red Hat has recently (2012) decided to eliminate most of the non-changing bits of the OS, from the root filesystem to the /usr filesystem.  /lib, /bin, and others, will end up as symlinks or bind mounts.

·                 Other important locations include /sys, /proc, and /proc/sys – where you can view most and change some kernel parameters.  (See also sysctl.conf and sysctl cmd on Linux and /etc/system and ndd cmd on Solaris).

If you are installing from packages you may not have much of a choice, as the package author will determine where the software gets installed.  Some packages and all source give you a choice as to where to install.  Most SAs learn over time it is best to keep such “add-on” software in a different location from the distribution software, usually a separate partition.  Doing so makes it easier later to upgrade the distribution while re-installing the add-on software.

/usr/local versus /opt

These locations are for post-install, non-vendor supplied software.  They are meant to make it easy to determine what was installed with the system, and to upgrade the OS.  During an upgrade these locations should remain untouched (although some testing is needed, since what was once optional software may now be part of the main distribution or else have been superseded by some other package).  For this reason it is often best to make this a separate storage volume.

You probably shouldn’t use both locations, but sometimes the packages you install will only go in one or the other location.  (Or even mixed in with system software in standard directories such as /usr/bin.)  Whichever location you prefer, make the other a symlink to it so all your additional software ends up in a single location.

/usr/local is the easier hierarchy to maintain.  All the standard system directories are duplicated under that: bin, sbin, lib, etc, man, ...  That makes it easy to maintain PATH, MANPATH, and LDPATH.  However name conflicts could occur (imagine installing two different LDAP packages, both with etc/ldap.conf).

/opt is more complex than /usr/local.  Under /opt are subdirectories for each package (or each vendor, which in turn may contain per-package subdirectories).  Each package (or vendor) directory contains the standard system directories except for etc and var; config files go under /etc/opt/packageOrVendor and data goes under /var/opt/packageOrVendor.  (Also /var/run/opt is used.)  This is more flexible since a user can set their *PATH variables to use any installed packages they want.  Sadly most users don’t want to set these!  Also, if there are lots of software add-ons, *PATH variables grow very long and become difficult to work with; the user may not be running the program they thought!

To simplify the configuration of *PATH there are reserved directories /opt/{bin,lib,doc,include,info,man} that an SA can use to hold symlinks to the real files under /opt/package/*.  So only those directories need be added to *PATH.
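Building such a symlink farm can be automated.  A sketch that works in a scratch directory (the package names foo-1.0 and bar-2.1 are made up), linking every per-package binary into a shared opt/bin:

```shell
# Create a toy /opt layout with two per-package bin directories:
mkdir -p opt/foo-1.0/bin opt/bar-2.1/bin opt/bin
printf '#!/bin/sh\necho foo\n' >opt/foo-1.0/bin/foo
printf '#!/bin/sh\necho bar\n' >opt/bar-2.1/bin/bar
chmod +x opt/foo-1.0/bin/* opt/bar-2.1/bin/*

# Link every per-package binary into the shared opt/bin directory,
# so only that one directory need be added to PATH.
for f in opt/*/bin/*; do
    ln -sf "../${f#opt/}" "opt/bin/$(basename "$f")"
done
ls opt/bin
```

In production you would re-run the loop whenever a package is added or removed; an SA script like this is exactly what the reserved /opt/bin directory is for.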

Personally I don’t find many conflicts, and most software is installed by packages anyway in the standard locations and not in either /usr/local or /opt.  So I prefer the /usr/local approach as it is simpler to maintain.

To keep all configuration information in one place, you could create /etc/local and make /usr/local/etc a symlink to there.  However you will lose your configuration data during an upgrade using that approach.  So I would suggest modifying your backup procedures instead to include /usr/local/etc.

Note that most official packages won’t have conflicting names on Unix systems.  On Linux this is more common and can be handled by the alternatives system.

Of course a lot of software packages don’t follow any standard!  Remember /opt or /usr/local should be a partition, hopefully under LVM so it can be easily grown when needed.

You may need to configure syslogd and log rotation, and security (PAM, firewall, etc.) for all newly added software!  (Discussed later in the course.)

Most *nix systems (except Solaris) use /usr/local, not /opt/*.  Most Linux packages will install in /usr; software compiled from source goes in /usr/local by default.

Types of Packages

Most OSes include a package system to easily install and update software.  A package is usually a compressed archive file with some standard contents.  A package system looks inside for pre- and post- install scripts to actually install and initially configure the package (and other scripts to uninstall it), package version data, package/library/other dependency information, file lists, and other information.  Some will include digital signatures or hashes (checksums) used to validate the package.

There are many advantages to using packages, including auto-updating of your system (say by cron every night) to ensure security and other critical packages are installed in a timely manner.  If you don’t want that, then there are websites and mailing lists you can join to keep informed of new or updated packages for your system.  A package database is an easy way to inventory your system and to track versions.  The dependency information ensures you aren’t missing any vital components.

Advantages of using source include that packages may not be installed where you want them (e.g., /usr/local versus /opt), whereas with source code you can always choose where to put stuff.  A package may not be available for your system (or may be an older version), or may be in rpm format when your system uses a deb package database.  (Not all packages are written correctly, and thus may not be translatable from one format to another!)

Packages can be either binary or source.  Source packages install the code, which must then be configured, compiled, and installed.  These are the most portable and configurable types of packages.  Installing the source package provides the advantages of having source code with the convenience of using a package system.

Binary packages are much easier to install but will break if some dependent library or sub-system is not configured the way the package author expected.  Often it won’t matter but missing system admin utilities, the location of fonts, names and locations of configuration files, etc., could cause some package to install but not work.

A sparse package contains only those objects that have been altered since the vendor first delivered the package version in a distribution.  These are used for patches.  When code changes are provided with sparse packages, these packages enable small patches rather than redistributing complete, but large, packages.  Sparse packages also minimize the changes made to the customer’s environment.

Different distributions use different package management systems.  While anyone can create their own type of package, only a few types are commonly used: Debian (“.deb” packages), Slackware (“.tgz” packages), Red Hat (“.rpm” packages), and Solaris pre-11 and BSD (“.pkg”, actually SVR4, packages).  Of these, the deb format was the first to include accurate dependency lists in the format.

The RPM format has been standardized by the LSB project for Linux.

A package management tool (both GUI and TUI) can read packages and correctly install, update, and remove packages or sets of packages of a given type.  The Debian tools (e.g., apt-get) will install a package by contacting a repository, downloading the latest (stable) version of the requested package, checking all dependencies, and if needed downloading and installing those as well.  The Solaris package tools and the Red Hat tools (e.g., yum) include these features too.  Note the Slackware package extension (.tgz) is unfortunately the same one commonly used for source code tar-balls.

A package system includes a package database, keeping track of which packages (and which versions) are installed on the local system.  This DB is used to determine if a package’s dependencies are satisfied.
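The dependency check amounts to a lookup of each required package against the installed-package database.  A sketch using grep, with invented package names and a plain one-name-per-line file format standing in for the real DB:

```shell
# Hypothetical installed-package list and a new package's dependencies:
cat >installed.txt <<'EOF'
glibc
zlib
openssl
EOF
cat >needs.txt <<'EOF'
zlib
libfoo
EOF

# -F fixed strings, -x whole-line match, -v invert: print each
# dependency that matches no line of the installed-package list.
missing=$(grep -Fvx -f installed.txt needs.txt)
if [ -n "$missing" ]; then
    echo "unsatisfied: $missing"
else
    echo "all dependencies satisfied"
fi
```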

Some distributions support two or more package management systems.  It is important to use only a single system!  If you have installed a Debian package and attempt to install an RPM package later that depends on the first package, it won’t know that the Debian package has been installed, because each system uses a different database!  (The Smart package management system aims to allow you to use any type of package on a single system.)

For the same reason, when installing from source, it pays to build a package from it and update your database, so the system knows that software is installed already.  Otherwise, the packages that depend on it won’t know that the software has been installed.

Finding packages one at a time and tracking down the dependencies and installing them can be a pain.  Keeping current is also difficult, as you need to check each package to see if a newer version is available and should be installed.  A good package management tool will automate these tasks.

The Internet contains repositories of RPM packages for various distributions (and of noarch packages that should work on nearly any system), such as  Larger organizations can (and should) create their own internal repository (a software depot).

Note that the extension “.rpm” is a registered MIME-type for Real audio, so clicking on such a file link on some web page will probably launch RealAudio to play it!  Be careful to right-click the link and choose Save Link Target As... (or the similar choice for your web browser).

Additional package management tools can be used to access a configured list of repositories, compare the contents of those repositories and your system’s installed package DB, find needed packages including any dependent packages needed, download them all and install them.  Such tools can be used to search the repositories for new packages to install as well as maintain all installed packages with the latest versions.

Use alien to convert packages from one format to another: deb to rpm, rpm to deb, etc.  (Demo: install alien and try it out.)

RPM Packages

Newer RPM systems use a GPG key to verify packages (RH uses this; most other repositories just use MD5 digests/checksums). The GPG key for each repository or package source must be downloaded (safely!) and imported:
   rpm --import /usr/share/doc/rpm-version/RPM-GPG-KEY

To display the list of all installed repo keys, use: rpm -qa gpg-pubkey*.  (See also the rpmkeys command.)

To show details on a key, use rpm -qi name_of_key.

To verify the signatures on downloaded packages, use rpm -K *.rpm.

Using digital signatures is much safer than relying on MD5 checksums.  In 2004, MD5 was shown to have some minor theoretical weaknesses.  In 2008 a more severe weakness was discovered.  It is possible for virus writers to craft a virus-infected file to have the same MD5 sum as the real file!

There are tools that can convert packages and other tools that can create packages.  These are useful since you must maintain a single accurate database of packages on your system.  However these tools won’t work in all cases.

(See RPM_Guide for details of how to create a spec-file, used to create RPM packages with rpmbuild.)

Download all the packages in a set (e.g., KDE) into some directory and install them all at once with: rpm -Uvh *.rpm (-U=install or update if newer, -v=verbose, -h=show progress bars).  It is best to install RPMs all at once; the tool can resolve circular dependencies amongst them.  You can install packages directly from the Internet by providing a URL, e.g.:

   rpm -ivh http://server/path/somePackage.rpm

To remove (“erase”) RPM packages, use:  rpm -e packageName.

To view information (“query”) about installed RPMs:  rpm -q packageName.

To view information about packages downloaded but not installed, add the “-p” option and list the pathname of the RPM package.

To do a case-insensitive search:  rpm -qa | grep -i packageName.

To see what packages were recently installed, use: rpm -qa --last | head

To view information about what package some file is in:  rpm -qf file (use absolute pathname).

Use “-ql” to list files in a package, “-qd” to see the documentation, “-qc” to see the configuration files, and “-qi” to see general package information.

There are nearly 20 rpm utilities available on Red Hat and Fedora, and about 20 additional ones for developers.  See “rpm<tab><tab>”.

The RPM database (at /var/lib/rpm) may become corrupted or out of date.  You can rebuild the DB indexes with: rpm --rebuilddb.  If that fails, try: rpm --initdb (will create if missing, otherwise works like fsck), and then try the rebuild.  As a final resort: rm /var/lib/rpm/__db.*; rpm --initdb; rpm --rebuilddb.

It is often useful to see which packages were installed, and which ones had files subsequently modified.  (Not every SA keeps an adequate journal!)  Knowing which files (especially config files under /etc) have been modified tells you where to look for local changes.  You can use this pipeline to find out:

   rpm -qa |xargs rpm --verify [--nomtime]

This will result in output such as this sample:

   ....L... c /etc/pam.d/system-auth
   S.5..... c /etc/rc.d/rc.local
   S.5..... c /etc/ssh/sshd_config

The meaning of this output can be found in the “Verify Options” section of the rpm man page (S=size, 5=MD5, ...).  In brief, every package’s files are checked against what the package metadata says they should be.  Only files that appear modified will produce output.  Of course some changes are expected.  (Demo with at.)
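The verify output can also be filtered mechanically.  For example, to pull out only the config files whose size and checksum both changed (i.e., files that were genuinely edited), using the sample output from above as input:

```shell
# Sample `rpm --verify` output (copied from the text above):
cat >verify.out <<'EOF'
....L... c /etc/pam.d/system-auth
S.5..... c /etc/rc.d/rc.local
S.5..... c /etc/ssh/sshd_config
EOF

# Keep lines flagged "c" (config file) whose attribute column shows
# both S (size changed) and 5 (MD5 digest changed); print the path.
awk '$2 == "c" && $1 ~ /S/ && $1 ~ /5/ { print $3 }' verify.out >edited.txt
cat edited.txt
```

In practice you would feed the real pipeline (rpm -qa | xargs rpm --verify) into the same awk filter to get a quick list of locally customized config files.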

Yum Package Management Tool

Red Hat package management includes GUI tools and system update tools such as yum (the Yellowdog Updater, Modified; Show yum.conf):  yum [-y] update [pkg].  These commands work recursively, in that if one package needs another, both will be fetched and installed.  These tools can be run via cron automatically each night.

By default, the GUI runs a yum update process when it launches.  You can turn that off from the GUI, somehow.  (Try the gpk-prefs tool.)  Note the package database is locked by yum, so only one process can use it at a time.  So if some update process is running, you won’t be able to run any other yum (or some rpm) tools.

At boot time, PackageKit also tries to refresh the yum database.  This also will prevent you from running yum immediately.  You can disable that by editing /etc/yum/pluginconf.d/refresh-packagekit.conf, and changing “enabled=1” to “enabled=0”.

Rather than work with RPM directly, it is often easier to use yum.  To install a package use yum install package.  You can also use yum to update, search, show which package provides a file, and remove packages.  info will show package information, list will show installed packages.  See the man page for more details.

Use --skip-broken ... to skip packages with broken deps.  Also yum grouplist and yum groupinstall (e.g. KDE).  Or, you can use regular yum commands and specify groups with “@name”.  Note, when installing groups, only “mandatory” packages are installed by default.  (yum grouplist will show which packages are optional and which are mandatory.)

To install the optional packages in some group too (that is, all packages in the group), you need to change a setting in yum.conf, or override it on the command line with the option:  --setopt=group_package_types=optional

The Yum shell can be entered via yum shell when doing a lot of yum commands.  It has a few commands that aren’t available at the command line: config to set configuration options, ts will show you the transaction set (or reset it).  The repo command will let you list, enable, and disable repos.  To run the various commands you’ve entered, use run.  If you’re not sure what commands the shell has, run help (or check the yum-shell man page).  You can exit the yum shell with exit or quit.

If the RPM DB was corrupted and repaired, chances are good you need to rebuild the yum DB too; run yum clean all (after fixing RPM’s DB).  If a yum transaction was interrupted, you can attempt to complete it with the command yum-complete-transaction.  Running yum check will examine the DB for errors.  Other useful commands include package-cleanup and yumdownloader.  See yum-utils(1) for a list.

If some upgrade has broken your system, you can try yum downgrade package.

Note the yum.conf file installed by default may not include a good list of repositories in /etc/yum.repos.d/.  You should edit this list, adding some Fedora repos such as RPM Fusion; I also use the Adobe Flash Player yum repo.  Search the Internet for other yum repositories (search for “fedora yum repository”).  Oracle provides a public yum server for its Linux, which is nearly completely compatible with RHEL (and CentOS).

To view all installed repos, look in /etc/yum.repos.d/.  You can use the yum repolist command too, but by default that only shows enabled repos.  To enable all repos, use “yum --enablerepo="*" repolist”.  This will also show how many packages are available from those repos.

To see what packages are available in a single repo, use:

   yum --disablerepo="*" --enablerepo="myrepo" \
   list available

For CentOS, adding additional repos such as rpmfusion won’t work unless you first install the Extra Packages for Enterprise Linux (or EPEL) repo.  EPEL is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, and Scientific Linux (SL).  To install the EPEL repo for CentOS 6, use:

   rpm -ivh epel*.rpm

Then you can install the rpmfusion repos as normal.  For CentOS 6:

   rpm -ivh rpmfusion*.rpm

Package Version Numbers

RPM package names look like any of the following:

·       name

·       name.arch

·       name-ver

·       name-ver-release

·       name-ver-release.arch

·       name-epoch:ver-release.arch

·       epoch:name-ver-release.arch

·       ...~tag   (e.g., “foo-1.0~beta”)

where the version and release may be two or three levels: major.minor.patch.  Arch is either some distro name, some hardware indication, or both.  For example: nmap-4.68-3.fc10.i386.rpm.  Release is usually dependent on the distribution.  Names containing ~tag are considered older than the same version without the tag, even if released later, since RPM v4.10.  (Debian does this too.)

Source packages use “src” instead for the arch.  “noarch” is used if the package doesn’t depend on a particular system architecture (i386, i686, ppc, ia64, ...), such as a script or documentation.  Not all RPM packages are named to this standard, however.  (And there are other standards, e.g., MS .NET, Office.)
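A file name following the name-ver-release.arch.rpm convention can be split with plain shell parameter expansion.  This sketch assumes the convention is followed; it handles hyphens inside the name (by stripping fields from the right) but not non-conforming names:

```shell
# Split an RPM file name into its fields using parameter expansion:
f=nmap-4.68-3.fc10.i386.rpm
b=${f%.rpm}          # nmap-4.68-3.fc10.i386
arch=${b##*.}        # i386  (text after the last dot)
rest=${b%.*}         # nmap-4.68-3.fc10  (release may contain a dot)
release=${rest##*-}  # 3.fc10  (text after the last hyphen)
nv=${rest%-*}        # nmap-4.68
version=${nv##*-}    # 4.68
name=${nv%-*}        # nmap
echo "$name | $version | $release | $arch"
```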

A new (2009) yum plug-in (extension to yum) called presto enables support for “deltaRPMs”.  These are sparse packages with the extension “.drpm”.  They contain binary patches (created with a variant of bsdiff) to the previous version of that package (which must be installed first).  Presto is enabled by default since Fedora 12.

An RPM package is a binary file that contains some header data and a gzip-ed (or 7zip-ed) cpio archive of the actual files.  (You can extract the cpio archive from a package with rpm2cpio, then view its contents with: rpm2cpio package.rpm | cpio -itv | less.)

The yum system comes with a number of additional utilities that are useful; see yum-utils(1).  For example, you can use package-cleanup --leaves --all to see which packages have no other packages that depend on them.  This can be used when removing one package to also remove other packages that are no longer needed:
    package-cleanup --leaves --all >before
    yum erase some-package
    package-cleanup --leaves --all > after
    diff before after

Or, you can yum install rpmorphan.  The package-cleanup utility is very useful; see the man page for details.

The rpm -q command may appear to show duplicate packages.  This happens when the default output format doesn’t show the arch, since a package may be installed for several architectures.  (Fedora 9 on x64 does that.)  The output format can be changed to include the arch if it doesn’t show by default, by using:
  rpm -qa --queryformat "%{name}-%{version}-%{release}.%{arch}\n"
or by adding the line:
 %_query_all_fmt %%{name}-%%{version}-%%{release}.%%{arch}
to either /etc/rpm/macros or ~/.rpmmacros.

Debian Packages

The Debian package management system has been regarded as superior to RPM (fewer dependency problems, but that’s likely due more to the single organization supplying packages than to a better system).  In fact the tools have been ported to Red Hat-like systems to work with RPMs.  If you configure the APT (Advanced Package Tool) tools to locate the correct repositories, you can use these tools to install RPM packages.

One command line tool is dpkg.  Use dselect for a menu-driven console interface (GUI: synaptic).  Note that dpkg only manages a package once it has been fetched.  You use different tools to access packages, from flash drives, hard disk, CDs, or from some APT repository (the preferred method) on the Internet.

There is a simpler-to-use wrapper called apt-get [dist-]upgrade pkg.  This command is part of the apt package.  apt-get uses the apt repositories listed in the file /etc/apt/sources.list (one per line):
deb <repository-URL> stable main contrib non-free

Or you can use a command like this to add an apt repo:

  sudo add-apt-repository \
   "deb <repository-URL> lucid partner"

To update your system takes two steps: apt-get update; apt-get upgrade.  The first refreshes the local list of packages.  (Yum does that automatically if the list is older than 20 minutes or so.)  Once your local cache of packages has been updated, you can also use apt-cache search, just like yum search.

One nice feature on Ubuntu and Fedora (optionally) is that when you run a command from the shell, if the command isn’t found but would have been if some package were installed, the error message says what package you need!  On Ubuntu, this searches only the local package cache, so if that is very old you may get incorrect information.  On Fedora, if the package lists are old, new ones are fetched, resulting in a long delay.  (Edit /etc/PackageKit/CommandNotFound.conf, and change the line “MaxSearchTime=2000” to something like “500”.  That changes the delay from two seconds to 0.5 seconds.)

The apt* tools manage their own cache.  The more repositories you have in sources.list, the more cache memory you need.  To adjust the cache settings, edit /etc/apt/apt.conf and add the following (this shows 24M; 8M or 16M is often enough):

APT::Cache-Limit "25165824"

Apt-get can be used to download and install packages.  (Demo:  apt-get install frozen-bubble.)  Use the tool dpkg-deb to inspect and to build deb packages.

The tools store and use the package information in /var/lib/dpkg.  Here you’ll find files describing all available packages and their status (e.g., installed or not).
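That status database is a simple text format: blank-line-separated stanzas of Field: value lines.  As a rough illustration, here is a sketch that extracts installed packages from such stanzas with awk (the sample data below is made up; real entries in /var/lib/dpkg/status have many more fields):

```shell
# Create a tiny made-up sample in the style of /var/lib/dpkg/status.
cat > sample-status <<'EOF'
Package: tin
Status: install ok installed
Version: 2.4.1-1
Provides: news-reader

Package: trn
Status: deinstall ok config-files
Version: 4.0-test77-1
EOF

# RS='' makes awk read blank-line-separated stanzas; print the name and
# version of each package whose Status shows it is installed.
awk -v RS='' '/Status: install ok installed/ {
    for (i = 1; i <= NF; i++)
        if ($i == "Package:") pkg = $(i + 1)
        else if ($i == "Version:") ver = $(i + 1)
    print pkg, ver
}' sample-status
```

Running this prints only the installed package from the sample (tin 2.4.1-1); the same approach works on the real status file.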

DEB supports virtual packages: a generic name that applies to any one of a group of packages, all of which provide similar basic functionality.  (For example, both the tin and trn news readers provide the “virtual package” called news-reader.  In this way other packages can depend on news-reader without caring which one you install.)

Debian packages also support meta-packages, equivalent to an RPM package group.

The .deb files are ar archives, an ancient Unix archive format typically used for holding compiled C function libraries, such as libc.a.  Debian package file names have this form: name_version-release_arch.deb.  The version is usually a 2- or 3-level number: major.minor[.patchLevel].
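That naming convention can be pulled apart with plain shell parameter expansion.  A small sketch (the file name below is made up):

```shell
f=frozen-bubble_2.2.0-3.deb    # hypothetical package file name

base=${f%.deb}          # strip the .deb suffix
name=${base%%_*}        # everything before the first "_"
verrel=${base#*_}       # the version-release part
version=${verrel%-*}    # drop the trailing "-release"
release=${verrel##*-}   # everything after the last "-"

echo "$name / $version / $release"
```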

Debian stable systems are updated only very rarely.  Debian also maintains a more leading-edge “testing” repository, but it can take years for packages in the testing repo to make it into the stable repo.  Backports are packages from the testing distribution recompiled for the current stable (or even oldstable) release, to provide users of the stable distribution with new versions of certain packages, such as the Linux kernel, the Iceweasel browser, or an office suite, without sacrificing the overall stability of the system.

Like the RPM system and yum, the Debian system and apt* include additional utilities that can help with managing your system.  For example, deborphan finds orphaned packages (libraries that nothing depends on anymore).

BSD and Solaris Packages

Software for Solaris versions up to 11 is delivered in a format known as SVR4 packages.  Solaris packages can also be delivered in Package Datastream format, in which a single package file contains one or more SVR4 packages.  Package datastreams are easier to distribute.  Unbundle (or bundle) packages from (to) datastream format files using pkgtrans.

The file /var/sadm/install/contents has an entry for every file in the system that has been installed through a package.  Entries are added or removed automatically using the binaries installf and removef, utilities used in the package install/remove scripts.

The commands used on Solaris are pkgadd, pkginfo, pkgrm, pkgchk.  (See the man pages for details.)  Sun uses these to distribute device drivers too.  The BSD commands start with pkg_, for example pkg_add -r pkg.

To add a Solaris package foo.pkg that is in package datastream format:

mkdir /usr/local/foo; pkgtrans foo.pkg /usr/local/foo all; # unbundle
pkgadd -d /usr/local/foo all

(Solaris packages are now compressed with p7zip.  Make sure you have SUNWp7zip installed first!)  To query the Solaris package system you can use several commands.  For example, to determine which package a given file belongs to, use “pkgchk -lp /path/to/file”.  Use “pkginfo -l package” to see info about some package.

Solaris pre-11 has a repository for “official” packages, plus two others: the OpenCSW site, home of CSW (community software for Solaris) packages, which usually install in /opt/csw, and a source-packages site whose packages usually install under /usr/local.  These are popular and include lots of 3rd-party software.  Blastwave is a private alternative to OpenCSW (you can’t use the two together).  It may be best to create /opt/local and mount it on /usr/local via lofs, or use a symlink.

CSW packages install into /opt/csw, with the binaries usually found in /opt/csw/bin.  The simplest way to download and install packages from OpenCSW is to install the package pkg_get.pkg.  Download it from the OpenCSW site and then run pkgadd -d pkg_get.pkg as root.  (You may also need to install either wget-i386 or wget-sparc as wget, somewhere in your PATH.)  Then update /opt/csw/etc/pkg-get.conf (or /etc/opt/csw/pkg-get.conf if you prefer) to use the mirror site closest to you and the appropriate subdirectory (unstable or stable).

Image Packaging System (IPS)

As of Solaris 11, the old package and patch systems have been replaced with a completely new and different system called the Image Packaging System (IPS).  IPS package repositories support a completely centralized architecture for managing a selection of software, multiple versions of that software, and multiple different architectures.  Administrators can control access to different software package repositories and mirror existing repositories locally for network restricted deployment environments.

IPS includes a number of command-line utilities including pkg(1) and graphical tools, Package Manager and Update Manager.  Additionally, IPS provides a MIME association of “.p5i” to allow for single click package installs.

IPS can validate its own consistency on a system and fix any software packages that fail validation.  IPS also provides an easy method of publishing new software packages to a repository through a series of package transactions that add package content, package metadata, and dependent system services to a publisher.  Administrators can easily create and manage new package repositories and associated publishers for local software delivery in an enterprise environment.

The new standard repositories are:

·       The release repository, the default repository for Oracle Solaris 11 Express 2010.11.  This repository receives updates with each new release of the Oracle Solaris platform.

·       The support repository, which provides the latest bug fixes and updates.  Administrators can only access this repository if they have a current support contract from Oracle.

While IPS packaging is the default system for Solaris 11, compatibility with older SVR4 software packages is preserved with pkgadd and related commands.  The Solaris 10 patchadd command and related commands are not available on Solaris 11, as these have been replaced with IPS package management tools.

Creating a local package repository:

It is common for organizations to create internal repos, and to only allow servers and hosts to update/install from them.  This gives control over when some update is applied (e.g., after testing).  In a larger organization, you may create several repos, for example “dist”, “testing”, and “current”.

Creating a repo for any package management system is fairly easy.  You need a directory (and sufficient disk space) to hold the packages that is network accessible (usually via http or FTP), and some index files that tools such as yum can use to determine which packages are where.  Digital signatures (or at least MD5 checksums) for the files must be made as well.  These files must be updated whenever you update the repo.  (Don’t forget to allow network access to your repo, through any firewalls!)
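The checksum part of that recipe can be sketched with plain coreutils.  (The directory and package names below are made up; real repo tools such as createrepo generate richer index metadata than this.)

```shell
mkdir -p myrepo
cd myrepo
echo "pretend package contents" > demo-1.0-1.noarch.rpm  # stand-in file

# Build a checksum index covering every package in the repo;
# regenerate it whenever packages are added or removed.
sha256sum *.rpm > SHA256SUMS

# Clients (or you) can later verify their downloads against the index:
sha256sum --check SHA256SUMS
cd ..
```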

For Solaris, you create a storage volume, initialize the repo, add packages, and finally configure and enable the pkg service:

# zfs create zpool/export/s11ReleaseRepo
# pkgrepo create /export/s11ReleaseRepo
# pkgrecv -s \
  -d /export/s11ReleaseRepo '*'
# ... # configure Solaris pkg service, steps not shown
# svcadm enable \

For Red Hat and other RPM based distros, the steps are similar (See the yum guides):

# mkdir -p /var/local_yum_repo
# cd /var/local_yum_repo
# cp my*.rpm .  # use yumdownloader to fetch
# createrepo .
# gpg --detach-sign --armor repodata/repomd.xml
# chmod -R a=rX .  # make all files readonly

The gpg step is optional, to sign the meta-data.  But make sure the matching public key is available, and your rpm packages were digitally signed with the matching key.  (The steps for this are discussed in a later course, when you learn to build RPMs.)

Once set up, configure a web or FTP (or other) server to make the files accessible.  Then on the various hosts that will use this new repo, add a file to /etc/yum.repos.d/ with content like this:

[myrepo]
name = This is my repo
baseurl = url://to/get/to/my/repo/
enabled = 1

Installing from source code:

Using tar review: tar -c|t|x -v -z -f file files...
tgz (tar-balls):  Demo with gcal.tgz: unpack it (tar -zxvf file), view README, INSTALL, ..., then run these three steps:

 ./configure --help
 ./configure --with-included-regexps  # needed on F12
 make;  su -c "make install"
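The tar flags reviewed above can be exercised with a quick round-trip in a scratch directory (all names below are made up for the demo):

```shell
mkdir -p src
echo "hello" > src/README        # something to archive

tar -czf demo.tgz src            # -c create, -z gzip, -f archive file
tar -tzf demo.tgz                # -t list the archive's contents

mkdir -p elsewhere
tar -xzf demo.tgz -C elsewhere   # -x extract, here into another directory
cat elsewhere/src/README
```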

(More details on working with source code are given in CTS-2322, where you will learn how to use make, RCS, a C compiler, and other tools.)

Restarting After Installing:

Once you have installed new software, you must apply the changes by restarting any applications or services that are using the old version, or by rebooting the system.  When a new kernel is installed, you must reboot to use the new one.  But for other software, you can simply restart the affected applications and daemons.

For on-demand services such as FTP or IMAP, you need to manually kill the running process; it should restart automatically as needed.  For stand-alone daemons such as Apache, you must restart them.  (Managing services and daemons is discussed later in the course.)  You should also restart any running applications (such as Firefox, or even better, the whole GUI) when they are updated.

When DLLs (shared objects) are updated, every running application or service that uses them must be restarted.  Failure to do so may lead to applications and daemons crashing.  To find the running processes that use a given DLL, run lsof on the library file (for example, lsof /lib/\* checks everything in /lib).  Note that some DLLs such as libc (on Linux, glibc) are probably used by everything, so a reboot may be simpler.
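If lsof is not installed, the same question can be answered on Linux from /proc, since each process lists its mapped files in /proc/PID/maps.  A rough sketch (matching by substring; run as root to see every process):

```shell
lib=libc     # substring of the library name to look for

for m in /proc/[0-9]*/maps; do
    pid=${m#/proc/}; pid=${pid%/maps}
    # Quietly check whether this process has the library mapped.
    grep -q "$lib" "$m" 2>/dev/null && echo "PID $pid uses $lib"
done
```

On any running Linux system this will report at least a few processes, since libc is mapped by almost everything.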

Alternatives for Linux

When you install two (or more) subsystems for the same purpose (for example, the printing subsystem, the mail subsystem, etc.), they may use conflicting commands and files.  For example the standard print commands are /usr/bin/{lpr, lpq, lprm}, various man pages, etc.  Each different print subsystem replaces these with its own version.  Obviously only one subsystem can be in use at once (unless you rename one set of files), so your choices are:
          Only install one version of a given type of subsystem.
          Install both versions, and resolve conflicts with symlinks.

Alternatives manages sets of symlinks for various subsystems.  Alternatives replaces commands with symlinks into /etc/alternatives/*, which in turn point to the actual commands (e.g., lpr.lprng, lpr.cups).  For this to work the package must name the commands the way that alternatives expects.
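You can model that two-level symlink chain in a scratch directory to see how it works (all paths below are made up; the real links live in /usr/bin and /etc/alternatives):

```shell
mkdir -p altdemo/bin altdemo/etc/alternatives
echo "I am CUPS lpr" > altdemo/bin/lpr.cups          # the "real" command

# Generic name -> /etc/alternatives entry -> chosen implementation.
ln -s ../etc/alternatives/lpr altdemo/bin/lpr
ln -s ../../bin/lpr.cups      altdemo/etc/alternatives/lpr

readlink -f altdemo/bin/lpr   # resolves through the whole chain
cat altdemo/bin/lpr           # reads the selected implementation
```

Retargeting the /etc/alternatives link to, say, lpr.lprng would switch every user of the generic name at once, which is exactly what the alternatives command automates.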

The alternatives command allows one to change entire sets of symlinks at once.  It also has “sets of sets”: some service sets are “slaved” to “master” sets; changing the master also changes all the slaves.  When an alternatives-friendly package is installed, the post-install script will invoke the alternatives command to set up the symlinks.

Note that alternatives is not a universal Unix/Linux feature.  It started with Debian (to complement the virtual package concept) and was imported into Red Hat.  If your distro doesn’t have it, you must deal with conflicts manually or else not install conflicting packages.

The Fedora alternatives system manages several important subsystems (you can always add others), including: mta (mail service: postfix vs. sendmail vs. exim) and print (print service: lpr vs. cups).  (Oracle Java packages can be converted to be alternatives-friendly; then you have java and javac service sets.)

To see which service sets are managed, run “ls /var/lib/alternatives”.

Trouble-shooting RPM

If you see “xxx is needed by yyy” errors when running yum update, such as:

Error: Missing Dependency: xxx = version is needed by package yyy

try this:

# yum --exclude=yyy update  # or exclude xxx

Or, find and exclude the package owning the file:

# rpm -qf /lib/xxx  # says package zzz
# yum --exclude=zzz update

This problem is usually caused by faulty RPM dependency info.  Sometimes you need to exclude xxx, sometimes yyy, and sometimes zzz.  If you can’t wait until the dependency problem is fixed, update excluding yyy, then download the rpm for yyy (from, say, a package mirror site) and manually install with:

    # rpm -Uvh --nodeps yyy.rpm

Of course this can be dangerous!  If you need yyy but not xxx, try yum erase xxx, then yum update.  If an older version of xxx is available that doesn’t cause problems it will be installed for now, and updated later.  Otherwise you will need to “yum install xxx” later yourself, after the problem has been fixed.

The “livna” repo is notorious for not using the same dependencies as any other repo.  However, it does contain packages not found elsewhere.  Best current practice when using yum is to install the livna repo but leave it disabled.  When you need some package that is only found at livna, use the yum option to enable that repo during the install (and subsequent updates) of that package:

          # yum --enablerepo=livna --skip-broken ...