PHP Magic Quotes officially discontinued.

Magic Quotes (which is a feature of PHP, not Apache) is no longer considered good practice, as it introduces some security problems. This feature is deprecated in PHP 5.3, and removed entirely from PHP 5.4 and later. For versions of PHP that do support it, it can only be enabled or disabled at the system level, not for individual users.

Here is the official documentation.

The Wikipedia entry includes a discussion of the problems with Magic Quotes.

You don’t need to ‘know’ Linux to use Linux

Lately, I’ve been noticing stories about how to use Linux you need to know half-a-hundred Linux shell commands and the like. Ah, what century are you from? Today, if you can see a window and handle a mouse, you’re ready to use Linux.

And no, I’m not talking about how we’re all already using Linux in devices like the TiVo or the Droid smartphone and through Linux-powered Web sites like Google. I’m talking about using Linux on the desktop.

There is nothing — I repeat, nothing — that requires any special knowledge to use Linux on the desktop today. If you’ve already mastered Windows XP, you’ll have little more trouble moving to a Linux desktop like Red Hat’s Fedora 12; Novell’s openSUSE 11.2; or Canonical’s Ubuntu 9.10 than you would in switching over to Windows 7.

I’m not saying using Linux isn’t different from running Windows. It is. For example, you’ll need special software like Crossover Linux to run Windows-specific software.

The interfaces also aren’t the same — but then, Windows 7 and Vista’s interfaces aren’t the same as XP’s, and Mac OS X’s Aqua interface doesn’t look anything like the others. Besides, can any other operating system besides Linux let you set up the interface so that it duplicates XP’s look and feel? I think not!

What you don’t need to use desktop Linux is to learn dozens of obscure Linux shell (aka command line) programs to get work done. Neither do you need to know how to edit configuration files by hand to get Linux set up properly.

Sure, it can help to know how to use the Unix/Linux shell. I was writing shell (awk, sed, and grep) scripts to get work done in Unix, and later Linux, before many of you played your first game of solitaire on Windows 1.0. My point is, for ordinary, everyday use, you don’t need to know any more about those things than you need to know how to edit the Windows registry to run Windows.

I use desktop Linux every day, and I’m a Linux expert. Do you know how often I turn to a terminal to get to a shell to run commands? Maybe once a month, if that.

Between the two major Linux desktop interfaces, KDE and GNOME, Linux has you covered. For applications, many of the most popular applications, such as Firefox and OpenOffice, run just the same on Linux as they do on Windows. For other end-user programs, Linux programs such as Evolution for e-mail and Pidgin for IM are just as good, if not better, than their Windows equivalents. And again, you don’t need to know anything special to use them.

Installing new software on Linux isn’t any trouble either. Better still, major Linux distributors like Ubuntu are continuing to make installing Linux software easier than ever with programs like Ubuntu Software Center.

Don’t get me wrong: if you’re running a Linux server, you really need to know Linux’s technical guts. But you know what? If you’re running a Windows server, you also need to know Windows’ version of the shell, PowerShell.

No matter what desktop operating system you’re running, if you really want control over exactly what it does, you need to know how to manage its command line tools. But for day-to-day use, Linux’s graphical interfaces make it just as easy to use as Windows or Mac OS X. Pretending that you need to be some kind of computer wizard to run Linux on the desktop today is just downright silly.

40 years of Unix



A U.S. Department of Justice consent decree enjoins AT&T from “engaging … in any business other than the furnishing of common carrier communication services.”

Mar. — AT&T-owned Bell Laboratories withdraws from development of Multics (Multiplexed Information and Computing Service), a pioneering but overly complicated time-sharing system. Some important principles in Multics will be carried over into Unix.

Aug. — Ken Thompson at Bell Labs writes the first version of an as-yet-unnamed operating system, in assembly language for a DEC PDP-7 minicomputer.

Thompson’s operating system is named Unics, for Uniplexed Information and Computing Service and a pun on “emasculated Multics.” (The name is later mysteriously changed to Unix.)

Feb. — Unix moves to the new Digital Equipment Corp. PDP-11 minicomputer.

Nov. — The first edition of the “Unix Programmer’s Manual,” written by Ken Thompson and Dennis Ritchie, is published.
Dennis Ritchie develops the C programming language.

Unix matures. The “pipe,” a mechanism for sharing information between two programs, which will influence operating systems for decades, is added to Unix. Unix is rewritten from assembler into C.

Jan. — The University of California at Berkeley receives a copy of Unix.

July — “The UNIX Timesharing System,” by Dennis Ritchie and Ken Thompson, appears in the monthly journal of the Association for Computing Machinery (ACM). The authors call it “a general-purpose, multi-user, interactive operating system.” The article produces the first big demand for Unix.

Bell Labs programmer Mike Lesk develops UUCP (Unix-to-Unix Copy Program) for network transfer of files, e-mail and Usenet content.


Unix is ported to non-DEC hardware: Interdata 8/32 and IBM 360.

Bill Joy, a graduate student at Berkeley, sends out copies of the first Berkeley Software Distribution (1BSD), essentially Bell Labs’ Unix V6 with some add-ons. BSD becomes a rival Unix branch to AT&T’s Unix; its variants and eventual descendants include FreeBSD, NetBSD, OpenBSD, DEC Ultrix, SunOS, NeXTstep/OpenStep and Mac OS X.

4BSD, with DARPA sponsorship, becomes the first version of Unix to incorporate TCP/IP.


Bill Joy co-founds Sun Microsystems to produce the Unix-based Sun workstation.

AT&T releases the first version of the influential Unix System V, which will become the basis for IBM’s AIX and Hewlett Packard’s HP-UX.

Ken Thompson and Dennis Ritchie receive the ACM’s Turing Award “for their development of generic operating systems theory and specifically for the implementation of the UNIX operating system.”

Richard Stallman announces plans for the GNU (GNU’s not Unix) operating system, a Unix look-alike composed of free software.

At the Winter USENIX/UniForum meeting, AT&T describes its support policy for Unix: “No advertising, no support, no bug fixes, payment in advance.”

X/Open Co., a European consortium of computer makers, is formed to standardize Unix in the X/Open Portability Guide.

AT&T publishes the System V Interface Definition (SVID), an attempt to set a standard for how Unix works.

Rick Rashid and colleagues at Carnegie Mellon University create the first version of Mach, a replacement kernel for BSD Unix intended to create an operating system with good portability, strong security and use in multiprocessor applications.
AT&T Bell Labs and Sun Microsystems announce plans to co-develop a system that would unify the two major Unix branches.

Andrew Tanenbaum writes Minix, an open-source Unix clone for use in computer science classrooms.

The “Unix Wars” are underway. In response to the AT&T/Sun partnership, rival Unix vendors including DEC, HP and IBM form the Open Software Foundation (OSF) to develop open Unix standards. AT&T and its partners then form their own standards group, Unix International.

The IEEE publishes Posix (Portable Operating System Interface for Unix), a set of standards for Unix interfaces.

Unix System Labs, an AT&T Bell Labs subsidiary, releases System V Release 4 (SVR4), its collaboration with Sun that unifies System V, BSD, SunOS and Xenix.

The OSF releases its SVR4 competitor, OSF/1, which is based on Mach and BSD.

Sun Microsystems announces Solaris, an operating system based on SVR4.

Linus Torvalds writes Linux, an open-source OS kernel inspired by Minix.

The Linux kernel is combined with GNU to create the free GNU/Linux operating system, which many refer to as simply “Linux.”

AT&T sells its subsidiary Unix System Laboratories and all Unix rights to Novell. Later that year Novell transfers the Unix trademark to the X/Open group.

Microsoft introduces Windows NT, a powerful 32-bit multiprocessor operating system. Fear of NT will spur true Unix standardization efforts.

NASA invents Beowulf computing based on inexpensive clusters of commodity PCs running Unix or Linux on a TCP/IP LAN.

X/Open merges with Open Software Foundation to form The Open Group.

U.S. President Clinton presents the National Medal of Technology to Ken Thompson and Dennis Ritchie for their work at Bell Labs.

Apple releases Mac OS X, a desktop operating system based on the Mach kernel and BSD.

The Open Group announces Version 3 of the Single UNIX Specification (formerly Spec 1170).

Linux Basics


1. What is Linux?

Linux is a free Unix-type operating system for
computer devices. The operating system is what
makes the hardware work together with the software.
The OS is the interface that allows you to do
the things you want with your computer. Linux
is freely available to everyone. OS X
and Windows are other widely used operating systems.

Linux gives you a graphical interface that makes
it easy to use your computer, yet it still allows
those with know-how to change low-level settings
by hand, right down to flipping a single 0 to 1.

It is only the kernel
that is named Linux; the rest of the OS consists of GNU
tools. A package with the kernel and the needed
tools makes up a Linux distribution.

Mandrake, Gentoo and Red Hat are some of the
many variants. Linux can be used on a large
number of boxes, including i386+, Alpha, PowerPC
and Sparc.

2. Understanding files and folders

Linux is made with one thought in mind: Everything
is a file.

A blank piece of paper is called a file in the
world of computers. You can use this piece of
paper to write a text or make a drawing. Your
text or drawing is called information. A computer
file is another way of storing your information.

If you make many drawings then you will eventually
want to sort them in different piles or make some
other system that allows you to easily locate
a given drawing. Computers use folders to sort
your files in a hierarchical system.

A file is an element of data storage in a file
system. Files are usually stored on hard drives,
CD-ROMs and other media, but may also be information
stored in RAM or links to devices.

To organize our files into a system we use folders.
The lowest possible folder is the root folder, /, where
you will find the user home folders under /home/.





Behind every configurable option there is a simple
human-readable text file you can hand-edit to
suit your needs. These days most programs come
with a nice GUI (graphical user interface), like
Mandrake’s Control Center and SUSE’s YaST, that can
smoothly guide you through most configuration.
Those who choose to can gain full control of their
system by manually adjusting the configuration
files from foo=yes to foo=no in an editor.
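Such an edit can even be done in one line with sed. A minimal sketch, using a temporary file and a made-up option name foo (GNU sed assumed for the in-place -i flag):

```shell
# Sketch: flip a made-up boolean option in a config file with GNU sed.
cfg=$(mktemp)                        # stand-in for a real config file
echo 'foo=yes' > "$cfg"
sed -i 's/^foo=yes$/foo=no/' "$cfg"  # -i edits the file in place (GNU sed)
cat "$cfg"                           # prints: foo=no
rm -f "$cfg"
```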

Almost everything you do on a computer involves
one or more files stored locally or on a network.

Your filesystem’s lowest folder, root (/), contains
the following folders:
/bin     Essential user command binaries (for use
by all users)
/boot     Static files of the boot loader, only
used at system startup
/dev     Device files, links to your hardware devices
like /dev/sound, /dev/input/js0 (joystick)
/etc     Host-specific system configuration
/home     User home directories. This is where you
save your personal files
/lib     Essential shared libraries and kernel
modules
/mnt     Mount point for a temporarily mounted
filesystem like /mnt/cdrom
/opt     Add-on application software packages
/proc     System information stored in memory, mirrored
as files
/usr     /usr is the second major section of the
filesystem. /usr is shareable, read-only
data. That means that /usr should be shareable
between various FHS-compliant hosts and
must not be written to. Any information
that is host-specific or varies with time
is stored elsewhere.
/var     /var contains variable data files. This
includes spool directories and files, administrative
and logging data, and transient and temporary
files.

The only folder a normal user needs to use is
/home/you/ – this is where you will
be keeping all your documents.




Filenames are case-sensitive: “myfile” and “MyFile”
are two different files.

For more details, check out:

3. Understanding users and permissions

Linux is based on the idea that everyone using
a system has their own username and password.

Every file belongs to a user and a group,
and has a set of attributes (read, write
and execute) for the user, the group and all (everybody).

A file or folder can have permissions that, for
example, allow the user it belongs to to read and
write it, allow the group it belongs to only to
read it, and at the same time prevent all other
users from even reading the file.

4. Who and what is root

Linux has one special user called root
(this is the user name). Root is the “system administrator”
and has access to all files and folders. This
special user has the right to do anything.

You should never log on as this user unless
you actually need to do something that requires
root privileges.

Use su - to temporarily become root
and do the things you need. Again: never log into
your system as root!

Root is only for system maintenance; this
is not a regular user.

You can execute a command as root with:

su -c 'command done as root'

Gentoo Linux: Note that on Gentoo Linux only
users that are members of the wheel group
are allowed to su to root.
5. Opening a command shell / terminal

To learn Linux, you need to learn the shell command
line in a terminal emulator.

In KDE: K -> System
-> Konsole (to get a command shell).

Pressing CTRL-ALT-F1 to CTRL-ALT-F6
gives you the console command shell windows, while
CTRL-ALT-F7 gives you
XFree86 (the graphical interface).

xterm is the standard XFree86 console installed on all
boxes; run it with xterm (press ALT-F2
in KDE and GNOME to run commands).

Terminals you probably have installed:

* xterm
* konsole (KDE’s terminal)
* gnome-terminal (GNOME’s terminal)

Non-standard terminals you may want to install:

* rxvt
* aterm

6. Your first Linux commands

Now you should have managed to open a terminal
shell and are ready to try your first Linux commands.
Simply ask the computer to do the tasks you want
it to using its language and press the enter
key (the big one with an arrow). You can add a
& after the command to make it
run in the background (your terminal will be available
while the job is done). It can be practical to
do things like moving big DivX movies as a background
process: cp movie.avi /pub &.
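The idea can be sketched end-to-end in a throwaway folder (the “movie” here is a tiny stand-in file, not a real one):

```shell
# Sketch: run a copy in the background and wait for it to finish.
cd "$(mktemp -d)"                                        # throwaway folder
dd if=/dev/zero of=movie.avi bs=1k count=10 2>/dev/null  # tiny stand-in file
mkdir pub
cp movie.avi pub/ &    # & puts the copy in the background
echo "the shell is free while the copy runs"
wait                   # block until all background jobs are done
ls pub                 # prints: movie.avi
```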

6.1. ls – short for list

ls lists the files in the current working folder.
This is probably the first command to try out.
It has a number of options, described on the ls
man page.

Example ls usage:

ls -al --color=yes

6.2. pwd – print name of current/working directory

pwd prints the fully resolved name
of the current (working) directory.

6.3. cd – Change directory

cd stands for change (working) directory and
that’s what it does. The folder above you (unless
you are in /, where there is no folder above)
is called “..”.

To go up one folder, to the parent:

cd ..

Change into the folder Documents in your current
working directory:

cd Documents

Change into a folder somewhere else:

cd /pub/video

The / in front of pub means that the folder pub
is located in / (the lowest folder).
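Putting pwd and cd together, a short sketch in a throwaway folder (the folder name Documents is just an example):

```shell
# Sketch: moving around with cd and checking where you are with pwd.
cd "$(mktemp -d)"     # throwaway working folder
mkdir Documents
cd Documents          # relative path: a folder inside the current one
pwd                   # the printed path ends in /Documents
cd ..                 # .. is the parent folder
cd /tmp               # a leading / means the path starts from root
```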

7. The basic commands

7.1. chmod – Make a file executable

To make a file executable and runnable by any

chmod a+x myfile name=toc12>
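A minimal sketch with a throwaway script (the file name myfile is just an example):

```shell
# Sketch: create a tiny script, make it executable, run it.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho hello\n' > myfile
chmod a+x myfile      # a = all users, +x = add the execute bit
./myfile              # prints: hello
```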
7.2. df – view filesystem disk space usage

df -h

Filesystem Size  Used Avail Use% Mounted on

/dev/hda3   73G   67G  2.2G  97% /

tmpfs      2.0M   24K  2.0M   2% /mnt/.init.d

tmpfs      252M     0  252M   0% /dev/shm

The flags: -h, --human-readable appends a size
letter such as M for megabytes to each size.
7.3. du – View the space used by files and folders

Use du (Disk Usage) to view how much space
files and folders occupy.

du is part of the GNU coreutils package.
Example du usage:

du -sh Documents/

409M    Documents

7.4. mkdir – makes folders

Folders are created with the command mkdir:

mkdir folder

To make a long path, use mkdir -p :

mkdir -p /use/one/command/to/make/a/long/path/

Like most programs mkdir supports -v (verbose).
Practical when used in scripts.

You can make multiple folders in bash
and other shells with {folder1,folder2} :

mkdir /usr/local/src/bash/{old,new,dist,bugs}

The command rmdir removes folders.
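The mkdir variants above can be sketched in a throwaway folder; the folder names are made up, and the brace expansion needs bash or a similar shell:

```shell
# Sketch: -p for deep paths, braces for several folders at once.
cd "$(mktemp -d)"
mkdir -p deep/long/path              # -p creates every missing parent
bash -c 'mkdir src src/{old,new}'    # brace expansion: two folders in one go
ls src                               # lists: new old
rmdir src/old                        # rmdir removes empty folders
```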

7.5. passwd – changes your login password

To change your password in Linux, type:

passwd
The root user can change the password
of any user by running passwd with the user name
as argument:

passwd jonny

will change jonny’s password. Running passwd without
arguments as root changes the root password.

7.5.1. KDE

From KDE you can change your password by going:

* K -> Settings
-> Change Password
* K -> Settings
-> Control Center -> System
Administration -> User Account

7.6. rm – delete files and folders, short for
remove
Files are deleted with the command rm:

rm /home/you/youfile.txt

To delete folders, use rm together with
-f (Do not prompt for confirmation) and
-r (Recursively remove directory trees):

rm -rf /home/you/foo/

Like most programs rm supports -v (verbose).
7.7. ln – make symbolic links

A symbolic link is a “file” pointing to another
file.
To make a (hard) link:

ln /original/file /new/link

This makes /original/file and /new/link the same
file – edit one and the other will change. The
file will not be gone until both /original/file
and /new/link are deleted.

You can only do this with files. For folders,
you must make a “soft” link.

To make a soft symbolic link :

ln -s /original/file /new/link


ln -s /usr/src/linux-2.4.20 /usr/src/linux

Note that -s makes an “empty” file pointing to
the original file/folder. So if you delete the
folder a symlink points to, you will be stuck
with a dead symlink (just rm it).
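The difference between the two kinds of link can be sketched in a throwaway folder (the file names are made up):

```shell
# Sketch: a hard link survives deletion of the original; a symlink dies.
cd "$(mktemp -d)"
echo data > original
ln original hardlink        # hard link: a second name for the same file
ln -s original softlink     # soft link: a pointer to the name "original"
rm original
cat hardlink                # prints: data (the content is still there)
cat softlink 2>/dev/null || echo dead   # prints: dead (dangling symlink)
rm softlink                 # just rm the dead symlink
```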
7.8. tar archiving utility – tar.bz2 and tar.gz

Tar is a very handy little program to store files
and folders in archives, originally made for tape-streamer
backups. Tar is usually used together with
gzip or bzip2, compression programs
that make your .tar archive a much smaller .tar.gz
or .tar.bz2 archive.


You can use the program ark (K
-> Utilities -> Ark)
to handle archives in KDE. Konqueror
treats file archives like normal folders, simply
click on the archive to open it. The archive becomes
a virtual folder that can be used to open, add
or remove files just as if you were working with
a normal folder.

7.8.1. tar files (.tar.gz)

To untar files:

tar xvzf file.tar.gz

To tar files:

tar cvzf file.tar.gz filedir1 filedir2 filedir2…

Note: A .tgz file is the same as a .tar.gz file.
Both are also often referred to as tarballs.

The flags: z is for gzip, v is for verbose, c
is for create, x is for extract, f is for file
(default is to use a tape device).
7.8.2. bzip2 files (.tar.bz2)

To unpack files:

tar xjvf file.tar.bz2

To pack files:

tar cvjf file.tar.bz2 filedir1 filedir2 filedir2…

The flags: Same as above, but with j for bzip2.

You can also use bunzip2 file.tar.bz2, which
will turn it into a .tar file.

For older versions of tar, try tar -xjvf or -xYvf
or -xkvf to unpack. There are a few other options
it could be; the tar developers couldn’t decide which
switch to use for bzip2 for a while.

How do you untar an entire directory full of archives?


for i in `ls *.tar`; do tar xvf $i; done

.tar.gz: for i in `ls *.tar.gz`; do tar
xvfz $i; done

.tar.bz2: for i in `ls *.tar.bz2`; do tar
xvfj $i; done
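The same loops work with plain shell globbing instead of backticks around ls, which also copes better with odd file names. A self-contained sketch that builds two small tarballs and unpacks everything:

```shell
# Sketch: build two small tarballs, then unpack every .tar.gz in the folder.
cd "$(mktemp -d)"
echo one > a.txt; echo two > b.txt
tar czf a.tar.gz a.txt
tar czf b.tar.gz b.txt
rm a.txt b.txt
for i in *.tar.gz; do tar xzf "$i"; done   # glob instead of `ls`
cat a.txt    # prints: one
```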

Why Linux isn’t yet ready for synchronized release cycles

Ubuntu founder Mark Shuttleworth has again called for the developers of major open-source software programs and Linux distributions to synchronize their development and release cycles. He argues that consistent and universal adherence to a specific time-based release model would promote more collaboration between projects, ensure that users have access to the latest improvements to popular applications, and make the Linux platform a more steady and predictable target for commercial software vendors.

Shuttleworth wants to organize major releases into three separate “waves” which would each include different components of the desktop stack. The first wave would include fundamental components like the Linux kernel, the GCC compiler, graphical toolkits like GTK+, and development platforms like Python and Java. The second wave would include the desktop environments and desktop applications, while the third wave would be the distributions.

Although a unified release cycle would reduce much of the complexity associated with building a Linux distribution, the concept poses significant challenges and offers few rewards for software developers. Achieving synchronization on the scale that Shuttleworth desires would require some open-source software projects to radically change their current development models and adopt a new approach that isn’t going to be viable for many projects.
Understanding time-based release cycles

A time-based release cycle implies issuing releases consistently at a specified interval. The development process for projects that employ this model generally involves establishing a roadmap of planned features and then implementing as many as possible until the project reaches the code-freeze stage near the end of the interval, at which point the features that haven’t been finished get deferred. The focus shifts to debugging and quality assurance until the end of the interval, when the software is officially released.

This model works well for many projects, particularly the GNOME desktop environment. One consequence of this model, however, is that it forces developers to work incrementally, and it discourages large-scale modifications that would exceed the time constraints of the cycle. Sometimes that window just isn’t large enough to merge and test major architectural changes that were incubated in parallel outside of the main code tree.

When that happens, developers have to ask themselves whether the benefits of the new features outweigh the detrimental impact of the regressions (like with the GVFS adoption in GNOME 2.22, for example). Sometimes they have to decide to pull out features at the last minute or push back the release date to allow for more debugging. These are hard choices, and, as Shuttleworth himself notes, making those choices requires a lot of discipline.

Although time-based cycles can work well for some projects, attempting to force all projects to adopt this approach and then correlate these universally could seriously degrade the development process. If projects begin to depend on synchronization, then delays at any level of the stack would cause disruption to every other layer. This could put enormous pressure on individual projects to stick to the plan, even if doing so would be detrimental to the program and to its end users.

Introduction to Linux


Some of my readers today will be aware of a beautiful operating system that goes by the name of Linux. For those who are not already familiar, here is a brief introduction: Linux is a free, open-source alternative to Windows and Macintosh. Linus Torvalds laid the framework for the Unix-based kernel many years ago and then made the source code open to all. He still works on the kernel today, but he’s not alone; millions of programmers around the world work to improve Linux in their free time. They’ve worked hard to bring Linux to maturity, and in the past couple of years it has reached a stage where the average computer user is more than capable of using it. In other words, you no longer need to know how a computer works or how to program in order for Linux to be useful to you.

So why am I bringing up this topic? Quite frankly, there aren’t enough Linux users accessing TechwareLabs, and I believe this needs to change.

Whether it’s because you’ve never heard of Linux, have an interest, or tried it years ago when it was still young and were disappointed, one thing is certain: you’re missing out. I’ll be elaborating further on Linux in future articles, but for now, here is a nice introduction.
What do you mean by open-source?

The source code is freely available on the internet per the GPL license. You are more than welcome to view the code, edit it, and republish a new product (assuming you know a thing or two about programming). The only catch is that you have to release your product under the very same GPL license.

This approach to software truly throws the concept of “proprietary” out the window, and is no doubt confusing to anybody who is business-minded. It’s a foreign concept for many as to why one would develop a product and not claim intellectual property rights. The Linux community, in general (though there are exceptions), does not seek to gain profit. Rather, they put their time into Linux for pride and the occasional “thank you.”
There are companies that sell Linux, though.

This is partially true. They’re still licensed under the GPL, which means they are required to release the source code to the general public. What companies such as Red Hat and Novell are doing is not selling the operating system, but rather they are selling support, primarily for servers. Even so, you can use their products for free. Red Hat Enterprise Linux has fees attached to it, but Red Hat sponsors an open-source community around Fedora, which is the free alternative, developed by programmers in their spare time. Similarly for Novell SUSE Linux Enterprise, there is a free alternative in openSUSE.
Windows works fine. Why should I use something else?

Here, we get to the heart of the matter. Why switch, you ask? What’s the point? Simply put, Linux is faster, more stable and above all, easier to use. The speed is due to higher efficiency in storing/retrieving information. The issue of stability isn’t even questioned by [knowledgeable] die-hard Windows fans. Ultimately, the most controversial claim I’ve made is that it’s easier to use.

This is where the argument rages on within the desktop market. Many long-time Windows users try Linux, get scared off, and then claim that Linux is hard to use. The fact is, Linux is different, but I would argue that this is a good thing. There is definitely a learning curve, as there always is when you try something new, but the more you play around with Linux, the more you’ll find it is simply better.
How is it better? What makes it easier?

Everything is better organized. For starters, you know that little program on Windows, Add/Remove Programs? Raise your hand if you’ve ever actually “added” a program using it.

I see a few hands from people who have via an NT system or something similar, but other than that, it is unlikely you’ve used Add/Remove for anything other than “remove” (though Vista does allow the user to download programs directly from Microsoft, a feature suspiciously appearing long after Linux started doing the exact same thing). In Linux, this little program is called the “package manager”, and this is where you both add AND remove your programs. Everything that’s currently installed, as well as everything you’re able to install from the supplied servers, appears in an easy-to-use catalog. For the most part, everything you need is right there in one place. Want to install an office suite? How about an IM program? Or how about a game? Just go to the respective section and choose the program you want. Check the boxes for everything you want to change (install/uninstall) and push the appropriate button to update your system (specifics will differ depending on the package manager used by the distribution).