UniForum NZ News
July 1997

UniForum NZ News is published by UniForum New Zealand, tel and fax 07-839-2082, after hours fax 07-839-2084, P.O. Box 585, Hamilton. Material may be reprinted, with acknowledgement, for non-commercial purposes. Views expressed in UniForum NZ News are not necessarily those of the editor or UniForum New Zealand. Volume 7 Number 6. © UniForum New Zealand.

President's Perspective: Bruce's Bandwidth

President Bruce Miller reviews the situation, vis-à-vis UniForum in the USA and in New Zealand:

There's good news and bad news, this month.

The bad news (for them mostly) is that UniForum US has found itself in a financial crisis after poor attendance at the show this past March. As Ray and I commented, the show was noticeably smaller than the previous year, a feeling emphasised within the rather gigantic scale of the Moscone conference centre, where the show has been held on several occasions in recent years. As a result Board elections have been postponed and many staff have left. The current plan is to move the base of operations to Washington State and occupy common quarters with the UNIX Users' Group in that area (WAUUG). "Oh, woe is me, the world has ended! ......"

The good news (for us and them) is that following the March conference which highlighted a shift in the fickle nature of the whizbang conference goer, UniForum US has chosen to focus its attention more closely on its own members and adopt a New Zealand model for its future activities. We would of course encourage UniForum US to continue an annual conference with a more member-oriented approach, more affordable and more residential in design. Carl Cargill praised our New Zealand conference in a recent article, though he questioned its ability to transport well into a US arena. I look forward to helping UniForum US prove him wrong. I don't think he'll mind!

What has this to do with us? The transition in UniForum US is of interest to us here in New Zealand on a number of levels. I just happen to have a few thoughts about this. Initially we might be concerned about the demise of our parent organisation. This would be wrong. UniForum internationally was first and foremost a representation of associated user groups. It still is; we just may have missed it in the glare of the Big Show. A concentrated, co-ordinated emphasis at the international level on issues and programs for users in their own area will provide better support for the ongoing, everyday users of open standards and better access to this expertise for those just getting into the act. We, here in NZ, have an opportunity to influence this evolution.

New Zealand members will have already noticed that the printed magazine was discontinued and is now available only on the Web. This was the major benefit of the international membership surcharge some of you have paid with your NZ subscription. The new organisation provides an opportunity for us to help forge co-operation of real value to ALL members. In the meantime we will be making refunds to those who paid the extra.

Where to from here? The reality of the situation, which I have alluded to before, is that this makes very little difference to us here in NZ. We have lots going on. At the June Board meeting we scrutinised our 1997 conference and began planning for UniForum NZ '98.

Regional meetings are to be a priority item. Immediate attention to an Auckland schedule is being managed by new board member, Ian Soffe. Send him an email to make sure you're on his list for announcements of dates, times and subjects. Better still, send him your offer of a presentation, a venue, or an idea for a good event.

Roger de Salis is wearing two hats, convening the conference again and putting in the ground work for a new role as membership secretary. Brenda Lobb has been co-opted to the board (again) and will continue producing UniForum NZ News. Ray Brownrigg continues to take responsibility for the library, as well as organising the Wellington regional meetings. Kaye Batchelor continues managing the bookshop.

The Web will be an increasingly important medium for our organisation. Anand has been maintaining the site thus far. He is looking for additional help to increase level and frequency of updates. UniForum NZ News will be published here as well as in print. Look for the bookshop and the library to be more comprehensively presented via the Website.

Who Gets The Wine ???

The following members receive a bottle of wine for returning the conference survey forms. Only three bottles are awarded here, as we have not received surveys 75 & 100 . . . yet. If you get yours in by the end of July, I'll extend the wine offer till then.

And the winners are........

That's all for now, feel free to continue to clog my machine with email.

More Wine

Missed out on the above liquid prizes? Party plans ruined? Never mind, here's another chance: the person recruiting the highest number of new members to UniForum New Zealand in the next three months (till the end of September) will receive a bottle of wine and a mystery prize, donated by The Seabrook Group Ltd. Make sure that the new recruits identify you clearly on their membership forms, giving a postal address. There is a form in this issue of News: cut it out, photocopy it and go twist a few arms in your organisation! Better still, tell them what a beaut society this is, and such good value, too . . .

Corporate Sponsors' Plaques

To testify to our gratitude for the ongoing support of our many corporate sponsors, UniForum NZ has once again ordered a handsome framed certificate for each sponsor. These, celebrating the 1997-1998 year, will be on their way to sponsors as soon as possible - with our thanks.

Microsoft's "Scalability Day": A Critique

Microsoft used its "Scalability Day" to launch a marketing campaign to show that Microsoft is serious about the enterprise and that large-scale applications can be attacked with commodity hardware. Here's a useful, if slightly biased, comment compiled by assorted Sun engineers around the world:

By and large, the trade analysts' reaction to the Scalability Day demonstrations has been lukewarm. Nevertheless, it's important that the details don't get forgotten, lest Microsoft succeed in sending the message that "maybe the fine print is missing, but Microsoft did show the ability to run very large OLTP problems and support very large data warehouses".

Fortunately, Microsoft did such a poor job on both of these, technically speaking, that we have a good opportunity to illustrate the very serious lack of understanding the Wintel world has of scalability.

The aim of this article is to provide some simple technical analysis to deflate both of the claims. There are two main points.

Microsoft did NOT use industry-standard benchmarks (e.g., TPC-C and TPC-D) or even vendor-standard benchmarks (e.g., SAP, Oracle Financials). Why not? What do they have to hide? The industry-standard benchmarks are audited and report total cost of ownership and scalability. Without them, customers should summarily reject all Microsoft claims.

Microsoft should certainly know better. Their leading proponent is Jim Gray, who in 1984 authored the celebrated "Anon et al." paper calling for standardized industry benchmarks. Here is a great quote from Jim Gray's book, "The Benchmark Handbook for Database and Transaction Processing Systems" (Morgan Kaufmann, 1993, page 8):

  "'Benchmarketing' is a   variation of  benchmark wars.  For   each
  system there   is a benchmark that  rates  that  system  the best.
  Typically,  the  vendor's marketing   organization  defines such a
  domain-specific  benchmark,   highlighting the  strengths   of the
  product and hiding its weaknesses. The marketing organization then
  promotes the benchmark as a standard, often without disclosing the
  details of the benchmark.  Benchmarketing leads to a proliferation
  of benchmarks and  creates  confusion. Ultimately,  benchmarketing
  leads to universal skepticism of all benchmark results."

Kind of sounds like what Microsoft is doing now, eh? Trying to create confusion and skepticism, because this benefits Microsoft. Shame on Jim.

1) Scalability Demo: A Billion Transactions per Day

This was the most publicized demo (including splashy ads in the Journal and elsewhere; Jim Gray has been using it as part of his road show, too). Basically, Microsoft demonstrated a collection of 20 4-way 200MHz Compaq Pentium Pro systems that delivered an aggregate sustained rate of one billion debit-credit transactions per day. The system fundamentals are described at http://www.microsoft.com/backoffice/scalability. Very importantly, each load generator was paired with a database machine: 85% of the queries from that generator went to the corresponding database server, and the remaining 15% were directed to a Transaction Manager which routed the query to one of the other 19 database servers. A uniform load was imposed on the system by the load generators (i.e., this is a TPC-A/TPC-B setup).

This demonstration was extraordinarily naive. Here are some very simple observations about the "benchmark":

A) THIS IS NOT A CLUSTER. It is a set of 45 machines hooked onto a switched Ethernet. Each server runs its own O/S and its own copy of the database, and the load generators are completely independent machines. While the demo does advertise a single console for "control", there are still 20 separate systems and databases to manage. Imagine something as simple as backup or restore.

This also means that it is essentially impossible to create anything but the most simplistic queries for the system. For example, try to do a join of two sets of tables distributed across the system. You can't, because it isn't a single database image; it's 20 of them. The performance numbers are little more than the performance of a single machine multiplied by 20.

Contrast the requirements here versus those of TPC-D, which the Microsoft set-up would be completely incapable of executing. There are 17 queries and two update functions that are fully specified in the TPC-D standard. Any one of those is an excellent example of a query that would need to be parallelized across the type of configuration that Microsoft used, assuming that SQL server were capable of such work.

Doing DSS joins that are parallelized across multiple nodes of a distributed architecture requires sophisticated distributed coordination. This can be done explicitly through message passing, a la the Informix XPS architecture, or via lower-level methods, such as are used by Oracle OPS. As far as we know, SQL server is capable of neither. Certainly, we have not yet seen a multi-node TPC-D result.
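To make the coordination point concrete, here is a rough Python sketch (our own illustration, not anything Microsoft or the database vendors ship) of the minimum work a repartitioned join across independent nodes entails: every row has to be hashed on the join key and shipped to the node responsible for that key before any local joining can happen.

    # Illustrative only: a toy "repartitioned join" across independent nodes.
    # Real parallel databases do this with message passing or shared-disk
    # coordination; a set of standalone servers fronted by a transaction
    # router has no equivalent mechanism.

    NODES = 20

    def node_for(key):
        """Decide which node owns a given join key (simple hash partitioning)."""
        return hash(key) % NODES

    def repartition(rows, key_index):
        """Ship every row to the node that owns its join key."""
        buckets = [[] for _ in range(NODES)]
        for row in rows:
            buckets[node_for(row[key_index])].append(row)
        return buckets

    def distributed_join(table_a, table_b):
        """Join two tables: both must be re-shipped on the join key first."""
        a_parts = repartition(table_a, key_index=0)
        b_parts = repartition(table_b, key_index=0)
        result = []
        for node in range(NODES):          # each node joins only its own bucket
            local = {row[0]: row for row in a_parts[node]}
            for row in b_parts[node]:
                if row[0] in local:
                    result.append(local[row[0]] + row[1:])
        return result

    # Tiny demo
    accounts = [(i, f"customer-{i}") for i in range(100)]
    balances = [(i, i * 10.0) for i in range(100)]
    print(len(distributed_join(accounts, balances)), "joined rows")

Even this toy version has to move every row across the network once; without that machinery, each of the 20 databases can only answer questions about its own slice of the data.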

A final note here: Microsoft is counter-claiming that we don't understand clustering versus scalability (see http://www.microsoft.com/ntserver/info/suncluster.html).

They FUD up everything by saying that they have delivered "distributed scalability" (whatever that means). Again, they are claiming that they can hook a bunch of systems up to a network and have some transaction manager re-direct simple transactions. If they really have distributed scalability, they would use industry standard benchmarks. Why are their TPC-C's only on small SMP systems and not on their distributed platform?

B) THE PERFORMANCE IS AWFUL. A billion transactions per day sounds enormous, but Microsoft should be doing much, much better given the hardware they have thrown at it. Do the following bit of maths:

   1,000,000,000 transactions/day
   ------------------------------    =   11,600 transactions/sec
   24 hours/day * 3600 seconds/hour
(Microsoft confirms this number on their web page, calling it 11,000 tps.)

So, per server this is:

   11,600 transactions/sec
   -----------------------  = 580 tps/server
          20 Servers
For the simple debit-credit benchmark this is really, really bad. Of course, such benchmarks were obsoleted a few years ago by the TPC (the now-defunct TPC-A & B). Way back then we were posting numbers in the 2,500 range for an SC2000. Today, we guess a single Sun ES6000 ought to be able to deliver around 15,000 tps. That is, we could probably run the whole benchmark on a single Sun server!
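For anyone who wants to check the arithmetic, the whole calculation fits in a few lines (a sketch only; the 15,000 tps figure for a single large SMP server is the guess quoted above, not a measured result):

    # Reproducing the arithmetic above.
    transactions_per_day = 1_000_000_000
    servers = 20

    tps_total = transactions_per_day / (24 * 3600)   # ~11,574 tps aggregate
    tps_per_server = tps_total / servers             # ~579 tps per 4-way server

    print(f"aggregate:  {tps_total:,.0f} tps")
    print(f"per server: {tps_per_server:,.0f} tps")

    # The guess above: one large SMP server at ~15,000 tps could absorb
    # the whole aggregate load on its own.
    print("single-server headroom:", 15_000 > tps_total)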

C) THE DEMONSTRATION ITSELF IS SERIOUSLY FLAWED. The problem setup is unbelievably contrived and there is no sense at all that "scalability" was demonstrated.

Did they keep everything in memory, the old debit-credit trick? Did they audit the benchmark results? Of course not - no auditor worth their salt would have anything to do with debit-credit. Debit-credit was exactly the reason why the TPC was founded almost 10 years ago. Jim Gray was involved in the creation of TPC-A (which led to its cousin, TPC-B). He spoke forcefully against debit-credit and in favor of the open, well-specified, openly reviewed, and audited approach that the TPC benchmarks require. Note also that debit-credit is so old that TPC-A and TPC-B are now dead too; the TPC killed them years ago.

If Microsoft wants to show off their scalable OLTP performance, let them do it the same way all other modern-day vendors do: with the industry-standard TPC-C benchmark. Here is another quote from Jim Gray ("The Benchmark Handbook", 1993, page 9) on the subject:

  "Many believe  that the TPC-A and TPC-B  transaction profiles  are too
  simple to capture the way computers are used in the 1990s. They want a
  benchmark that has simple read  transactions, and a complex data entry
  transaction. They  also want a  benchmark with  realistic input-output
  formatting and with some data  errors so that transactions abort. [The
  TPC-C benchmark] meets these requirements .  . . and supplant[s] TPC-A
  and TPC-B"
So, by Jim's own admission, Microsoft is not measuring computers the way they are being used in the 1990s! This is about right - the sophistication of the demonstration is definitely circa 1985.

Observe, further:

Scalability means that a product can dynamically handle varying degrees of work, and that when a system is "maxed out", the customer can readily add H/W (CPU's, memory, and I/O), so that the system continues to behave well. Various vendors have amply demonstrated scalability, to as high as 64 processors. If scalability is achieved by the addition of nodes, rather than processors to nodes, then all the difficulties of data partitioning we've talked about for DSS come into play. For example, if Microsoft had used 21 nodes rather than 20, they would have had to move about a 20th of the existing data onto the new node.
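A quick way to see the "about a 20th" figure: with the data spread evenly over N nodes, growing to N+1 nodes means the new node must end up holding 1/(N+1) of the data, all of which has to be copied off the existing nodes. A back-of-envelope sketch (ignoring the far larger shuffle a naive rehash of every row would cause):

    # Minimum data movement when growing an evenly partitioned system by one node.
    def fraction_moved(nodes):
        """Fraction of all data that must move to the newly added node."""
        return 1.0 / (nodes + 1)

    for n in (4, 10, 20, 64):
        print(f"{n:>3} -> {n+1:>3} nodes: move {fraction_moved(n):.1%} of the data")
    # 20 -> 21 nodes: move 4.8% of the data, i.e. roughly a twentieth.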

In summary, Microsoft really only proved that they know very little about how to build parallel databases (they didn't build one) and almost nothing about the real world operational concerns of a bank: system and database management, load distribution and what kinds of transactions are important, to name a few.

Even if you give them a lot of credit and extrapolate to a time when they could actually address all of the above (say, in a few years), at best, they will have re-created an SP2. What is different about Microsoft, NT, etc. that makes them think they will have great success over those hard applications problems where the SP2 has failed miserably? Nothing at all. There is no fundamental innovation in NT or Wolfpack, or Pentiums or ServerNet interconnects or anything. If anyone should be able to make the model work it's IBM, but they have clearly met with limited success.

2) Scalability Demo: Terabyte Database

This simple image-retrieval benchmark is a DEC Alpha system that allows Internet access to a large (Terabyte-sized) collection of satellite imagery. While the benchmark has nothing to do with data warehousing or data mining, there is no doubt that Microsoft would be happy if people didn't understand that and were left with the idea that NT can handle terabyte databases. It turns out that the database they implemented was actually fairly small in terms of records. It's just that the records referred to some large datasets (images) which inflated the byte count to the terabyte range.

The system fundamentals were:

The database comprised 50 million records, each encoding a tile covering a 32-km square patch of the earth. These were "meta-records" in that they described the tile, with the actual image being stored elsewhere. The query against the database was trivial: "give me the data at a certain resolution for a given part of the earth." The images were stored at three resolutions. The idea was that a user could request an image via a 28.8 modem connection and get a response within 10 seconds.

Really the only thing the demo showed was that it is possible to get the connectivity of a Terabyte worth of storage. That's it. This is like asking how much lead you can put in an aeroplane without worrying about whether it will fly.

There are two key issues, here. Firstly, this was NOT a data warehouse. There is no sense that the database can support any kind of query other than a very simple read transaction. And in any case, the server is way, way too small to handle a data warehouse of this size with any appreciable processing. This is why we have TPC-Ds - to measure the performance of warehouses, not lots of disks.

If a vendor in 1997 wishes to demonstrate high DSS data volumes and excellent query parallelizability, then an industry-standard benchmark exists. Vendors who choose not to run it always find some seemingly compelling reason not to, but in the end, they really just want to take the easy road out. Data warehouses are large, complicated and require a lot of good technology in order to scale well, perform well, and be manageable. How much easier just to throw a configuration together (and avoid an independent audit). A lot of companies have taken the high road and shown DSS performance legitimately, among them Compaq, HP, IBM, Informix, NCR/Teradata, Oracle, Pyramid, and SUN.

Secondly, it is actually a SMALL database! Fifty million records is not a huge database. What gets them into the terabyte range is using visual images. Each image is about 20KBytes, or at least 10 times the size you would expect for most warehouse applications. It would be surprising if the actual metadata records were much more than 0.5KBytes. 50 million records x 500 Bytes/rec = 25 Gigabytes of database, meaning this is a very simple 25GB database.

Contrast that with how the "terabyte" is achieved: 50 million tiles x 20KBytes/tile = 1TB of on-line image data.

A key observation: the queries are not made against the image data, only the metadata records. For example, the database does not support a query to return a list of tiles that have some percentage of water indicated.

Further, with 28.8K modem connections, the amount of I/O pressure that can be put on the database is trivial. Even 1000 simultaneous downloads demand a wimpy 3.5 Megabytes/sec of sustained I/O. We can achieve this easily from a single disk. An E10000 running real decision support queries will sustain about 500 times this rate (about 1.5GB/sec)!
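Pulling those numbers together in one place (the 500-byte metadata record and 20KB image sizes are the estimates given above; the 1000-user figure is just an assumption for illustration):

    # Back-of-envelope numbers for the "terabyte" demo.
    records        = 50_000_000
    metadata_bytes = 500          # estimated size of one meta-record
    image_bytes    = 20 * 1024    # ~20 KB per image tile

    metadata_total = records * metadata_bytes   # ~25 GB: the real database
    image_total    = records * image_bytes      # ~1 TB: just bulk image storage
    print(f"metadata: {metadata_total / 1e9:.0f} GB, images: {image_total / 1e12:.2f} TB")

    # One 20 KB tile over a 28.8 kbit/s modem:
    seconds_per_tile = image_bytes * 8 / 28_800
    print(f"download time per tile: {seconds_per_tile:.1f} s")   # comfortably under 10 s

    # Aggregate I/O pressure from 1000 simultaneous modem downloads:
    mb_per_sec = 1000 * 28_800 / 8 / 1e6
    print(f"sustained I/O demand: {mb_per_sec:.1f} MB/s")        # ~3.6 MB/s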

Except for demonstrating a visually-interesting internet application and the ability to physically connect a lot of disk to an Alpha Server, this in no way shows a large database application with complex queries, full-table scans, high sustained I/O rates, etc.

UniForum NZ '97 Conference Comment

"UniForum NZ '97 is the best way to share ideas and concepts. This is the best conference for sharing and learning that I have had the opportunity to attend."

UniForum NZ '98

Roger de Salis, that tiger for punishment, has accepted reappointment as conference convenor for our next conference. He's assembled the team and they're already hard at work on the next conference:

I really appreciate all the time taken by all UniForum NZ '97 conference participants to fill out the questionnaires. One thread that came through strongly was that everyone was pretty happy with the location. The hotel did an excellent job with the food, general courtesy and taking care of the conference requirements.

I think that I personally would prefer that the conference does not become fixed at a single, permanent location, simply because potential participants who might be tempted by a new location may feel less enthusiastic about returning to Rotorua. I would welcome feedback on this point, directly to roger@newzealand.sun.com.

Apart from that, the remaining constant in the questionnaires was that the various comments and suggestions that people had were always matched by opposite comments from others. It just goes to show that you cannot please all of the people all of the time!

One area the committee would like to improve on is the exhibition. There are many threads as to why the exhibition was very thin this year. Many companies that previously exhibited are now obsessed with NT, and are forgetting all their existing non-NT customers. Several companies simply want to spend their (limited) marketing dollars elsewhere. In others, the people who were most involved with UniForum NZ, and knew its value, have moved on and no-one else in the company has picked up the reins. There is a thread in the comments about more of the software side of the industry exhibiting. We are certainly exploring innovative ways to make that happen.

Thanks again for all the very pleasing support at this year's conference. Here is hoping next year is even better.


UniForum NZ '98
Preliminary Call For Papers

Expressions of interest are invited for presentations at UniForum NZ '98, the 15th annual conference of UniForum New Zealand, to be held in May, 1998.

For refereed papers, abstracts are required by 20th August, 1997 and full papers by 20th September, 1997. For non-refereed papers, abstracts are required by 1st December. Notification of acceptance of papers will be given by 20th December, 1997.

For further information, contact Ray Brownrigg, UniForum NZ '98 Conference Program Coordinator fax 0-4-495-5118; email ray@isor.vuw.ac.nz


Why You Need To Know About Cryptography

Jenny Shearer, chair of the public policy committee of the Internet Society of New Zealand, urges us all to get informed about this vital issue:

Cryptography has come a long way from something we may have experimented with as kids, when you changed a few letters around so the kid at the next desk couldn't read your note. With the arrival of the Internet, cryptography use has become one of the important ethical issues of our time, and it is one that everyone needs to understand.

New Zealand, with some talent in the area of cryptosystems, is being held back from full participation in the early development of global commerce using "strong" cryptography, by a combination of a poor reading of archaic international agreements by the Ministry of Foreign Affairs and Trade, and an attitude of timidity or passivity by our politicians.

The cracking of the Enigma code, the famous German cryptosystem of World War II, has set the public image of cryptography as something studied by learned scientists in concrete bunkers, something ordinary people don't need to concern themselves about. A government preoccupation. This viewpoint has become a dangerous one with the increasing amount of information about our individual lives which is being put on-line, and with the hugely increased ability of governments to carry out electronic surveillance.

The ability to chat to someone else in private, to send private mail, to have the security of medical and financial records, and carry on communications on matters of personal sensitivity or political importance, has largely been assumed in our society. That is mainly because of a lack of threat to most ordinary people in the area of privacy of communications. A wiretap gained by a court order, someone listening at the door, was about as far as it went. As a result, the right to privacy of communications is a largely unexamined right, and one which is being heavily challenged around the world as the Internet takes hold of large amounts of information.

Cryptography is the way people protect their information on the Internet, and it comes in many grades of "difficulty" and many types. That is, a personal message to a friend may not require particularly strong cryptography, or any at all; it is not the end of the world if someone else gains access to it and reads it. However, if the transaction concerns a large amount of money or important political or commercially sensitive information, then the use of strong cryptography is essential.

"Strong" cryptography is now defined almost exclusively as public key encryption, a technique which involves a "public" key, a scrambling agent which everyone has access to and can send someone mail with, using their particular "public" key. The private key remains the secret of the person who holds it, and is used to "uscramble" the message sent using the the public key. Unless there is access to the private key, the code cannot be broken.

It is this "strong" cryptography that our government, following the lead of the United States, has the problem with. The Clinton Government is currently committed to the use of "key recovery" schemes, whereby private keys must be made available to the Government on request. The other scheme being discussed around the world is "key escrow" where the government holds private keys, presumaby "in trust." In fact, many citizens have good reason not to trust their government with the key to all of their personal on-line communications. The power taken from citizens and conferred on governments by the use of such schemes, is likely to change the very definition of being a human being in the electronic societies of the future. While this debate rages on in the United States legislature, export of "strong" cryptography is not permitted, except by some financial agencies such as banks.

Within New Zealand, as in the United States, use of "strong" cryptography is currently permitted. Indeed, Information Technology Minister Maurice Williamson has an excellent understanding of the importance of cryptography in protecting sensitive information, and the role that it will play in establishing electronic commerce on the Internet. Commercial development in cyberspace is something that New Zealand is well-placed to exploit to the full. However, the Ministry of Foreign Affairs and Trade has judged that a defence agreement deriving from the 1950s (before faxes, modems and so on were developed) means that source code, which is a textual description of what the computer is required to do, may not be exported in the case of public key cryptographic products. If, on the other hand, the export is carried out electronically, this old-style agreement does not cover it, so it is legal. Though one bank is setting up a payment strategy exploiting the loophole in the regulations, the loophole may readily be closed. The anomaly has created a barrier for New Zealand cryptography developers, who are missing out on opportunities to involve themselves in the development of major Internet commercial systems, which typically require a nicely shrink-wrapped "exported" product in order to make their transactions legally satisfactory. (The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies is derived from the now defunct CoCom agreements which date from the Cold War. The Wassenaar Arrangement states that "national discretion" may be used in its application. There is no requirement for governments to impose controls on cryptographic items, and indeed many countries which are signatory to the agreement have no export controls on cryptography, or have token controls which are not enforced. This is because the days when cryptography needed to be treated as a munition, or weapon of war, have passed.)

In a letter to the Internet Society of New Zealand, Minister of Foreign Affairs and Trade Don McKinnon replies to concerns of the Internet Society:

"I do however have some difficulty with your assertion that "there is no requirement for governments to impose controls on cryptographic items". As a member of the Wassenaar Arrangement, New Zealand has accepted an obligation to screen exports of strategic goods, which Wassenaar countries have agreed includes cryptography. We do this by requiring an export license before strategic goods can be exported. This enables us to satisfy ourselves that exports of such goods are bona fide transactions intended for legitimate destinations."

It is the licensing process, that is, the levels of restriction employed by the Ministry, which is of concern to the Society. Cryptography developer Peter Gutmann believes the law should be changed to allow products such as his "strong" encryption software "CryptLib", which is already being used by a number of organisations overseas, to be officially exported. The issue is made more important by the recent "cracking" of 56-bit DES encryption by a group of people working together on the Internet. The 56-bit DES system has been a standard allowed for export (with a provisional key recovery scheme to be set up) by the United States, and until now has been thought an adequate security system.
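The scale of that cracking effort is easy to appreciate with a little arithmetic. The key-search rate below is purely an assumed figure for illustration; the point is that a 56-bit keyspace, while enormous, is within reach of a large co-operative search, whereas each extra key bit doubles the work.

    # How big is a 56-bit keyspace, and what does a brute-force search cost?
    keyspace = 2 ** 56                     # ~7.2e16 possible DES keys

    assumed_rate = 5e9                     # keys/second: an assumed aggregate
                                           # rate for many machines co-operating
    seconds = keyspace / assumed_rate
    print(f"exhaustive search: {seconds / 86400 / 365:.2f} years "
          f"(expected time is half that)")

    # Every additional key bit doubles the work, so 128-bit keys are
    # 2**72 (~4.7e21) times harder than 56-bit keys, not merely somewhat harder.
    print(f"128-bit vs 56-bit work factor: {2 ** (128 - 56):.1e}")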

The commercial implications are fairly obvious: if New Zealand can provide the best in unbreakable encryption products, while the United States battles over the issues in its legislature, New Zealand may stand to gain a useful foothold in global commerce. It would appear there is no reason not to remove this export control, and in doing so, New Zealand would be taking a global stand in favour of privacy of communications as of right. It would be agreeing with the general view of the Internet community that use of strong cryptography is critical to the use of the Internet as an open political and public forum, and an essential security measure for the development of electronic financial transactions. Assurance of complete protection of the confidentiality of patient records is essential to bring medical systems on to the Net.

Though the movement in favour of similar relaxation of export controls in the United States is strong, the US legislators have raised other considerations to deal with. Their fears are that unbreakable cryptography will create a threat to national security if it is used by terrorists, drug dealers and other criminals; further, that confidential business transactions will assist in tax evasion, and that pornographers will be difficult to detect. Evidence indicates that these fears are largely groundless: most citizens traditionally pay their taxes in recognition that this is necessary to society, and traditional tracking methods through bank accounts and so on would still be effective. Further, criminals and terrorists already hide their communications effectively: wiretaps have been shown to be of limited use, and the fight against drug dealers traditionally focuses on many fields other than surveillance of communications. In the difficult field of pornography regulation, many Internet initiatives are underway, and in New Zealand the detection of paedophiles indicates that they lack sophistication in the use of the Internet. Further, they have, to a point, to "go public" in order to contact each other.

However, it is clear that the United States is intending to pick its way carefully through the issues and the implications of strong cryptography being released through US computers and products globally. As the world's chief superpower, responsible for keeping the peace, a conservative approach about possible aggressors may well be appropriate. The world has much to lose if the defence systems of the United States are compromised.

However, for New Zealand to follow the lead of the United States in this area, for no particular reason except that if the US isn't doing it we shouldn't either, is an insult to the intelligence of New Zealanders and the cryptographic community. On this important ethical issue, New Zealanders should be saying yes to open use of cryptography, and the opportunities for freedom of speech and protected global commerce that come with it. The general community needs to start looking at its responsibilities in this important area, and in others, such as international copyright regulation, which could also cause major problems to the Internet structure.

The current debate about gambling on the Internet, and whether New Zealand should take up opportunities to run Internet casinos, is another issue of the new global ethical viewpoint which is being developed by the Internet community. Should New Zealanders, as responsible global citizens, be setting up organisations which will be likely to target the poorer and less responsible citizens of other nations? Is this an example our nation should be setting to the rest of the world?

In developing a new role for itself within the global Internet community, New Zealand needs its own people to look hard at developing a position which represents the ideals and aspirations of New Zealanders, on all of these matters.

UniForum NZ '97 Another Conference Comment

"The conference is a vehicle for contact with fellow practitioners and vendors. The value of this is *OUTSTANDING* at UniForum NZ."

Keeping Up With The Industry

Some acronyms you may need to know:

PCMCIA    People Can't Memorize Computer Industry Acronyms
ISDN      It Still Does Nothing
APPLE     Arrogance Produces Profit-Losing Entity
SCSI      System Can't See It
DOS       Defective Operating System
BASIC     Bill's Attempt to Seize Industry Control
IBM       I Blame Microsoft
DEC       Do Expect Cuts
CD-ROM    Consumer Device, Rendered Obsolete in Months
OS/2      Obsolete Soon, Too.
WWW       World Wide Wait
MACINTOSH Most Applications Crash; If Not, The Operating System Hangs
PENTIUM   Produces Erroneous Numbers Through Incorrect Understanding of
          Mathematics
COBOL     Completely Obsolete Business Oriented Language
AMIGA     A Merely Insignificant Game Addiction
LISP      Lots of Infuriating & Silly Parenthesis
MIPS      Meaningless Indication of Processor Speed
WINDOWS   Will Install Needless Data On Whole System
GIRO      Garbage In Rubbish Out
MICROSOFT Most Intelligent Customers Realize Our Software Only Fools
          Teenagers

Computerworld Excellence Awards

UniForum NZ members fill a range of jobs in many leading New Zealand companies, and often get involved in critical projects. Computerworld has announced a set of awards that celebrate the achievements of New Zealand business in using information technology for competitive advantage. The awards, to be presented at an event scheduled for February 1998, will cover a range of categories including overall excellence in the use of IT, most successful project implementation of the year, information systems manager of the year, technology innovator of the year, etc.

Individuals, teams and companies can be nominated for the awards, which provide an opportunity for recognition of outstanding effort and achievement. Contact Alec Brown of IDG if you want to nominate someone, or for further information. There'll be more about these awards in Computerworld as the year progresses, too.

UniForum NZ Library Update

UniForum NZ maintains an ever-expanding and up-to-date library of relevant journals, books and technical documents which has been built up over a number of years from subscriptions, newsletter exchange arrangements, free publications and individual donations. UniForum NZ Librarian, Ray Brownrigg, reports: Just a brief report this month, consisting of additions to our regular periodicals:

AUUGN V18:2                     May 1997
BYTE V22:7                      Jul 1997
Communications                  May, Jun 1997
Network World Issue 20          Jun/Jul 1997
SunExpert V8:4                  Apr 1997
SysAdmin V6:5,6,7               May, Jun, Jul 1997
Unix Review V15:5,7             May, Jun 1997
Unix Review V15:6               1997 Buyer's Guide
Click here for a complete list of library holdings. One small note, SunExpert now includes RS/Magazine. To order publications to take out on loan, contact Ray at ISOR, Victoria University, PO Box 600, Wellington. Phone 0-4-472-1000 X 7068; fax 0-4-495-5118; email ray@isor.vuw.ac.nz

$ $ $ Money Available $ $ $

The UniForum NZ Research Fund has money available to support research projects, both practical and academic, of interest to UniForum NZ members. It is especially suitable for student projects.

Contact Anand Raman Tel. 06 350 4186 for further details.

Linux Lines

Bill Parkin talks about the learning curve with Linux at home: Our home *nix setup grew from a Bell Technologies System V release 3.2 on a little '386 with no ethernet, serving a terminal and a '286 running kermit over RS232. The current incarnation is an ethernet network with all Linux systems. The heart is a process/file server with a CD-ROM changer; also in the basement is the gateway system. Upstairs are a pair of machines as our access systems, and the print server. At present there is a development system attached as well.

So what does it all do for us? Central to it all has been the learning and modelling aspect. Also required were email and usenet news, and we wanted it for ordinary text work too. I've wanted a household control system - our first micro-computers had this use in the spec.

Early on I worked out that the unix sense of time with cron was required for my vision. A fairly early piece of hardware was a silicon switch - an opto-isolated triac in a mains power 'cable'. Control is from the DTR line of an RS232 port. This gives us a sunny dawn every weekday, plus notice when it's time to move.
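As a rough illustration, a few lines of Python with the pyserial library can drive the DTR line the same way; the port name and the crontab times below are examples only, not the actual setup (which pre-dates pyserial by some years).

    # light.py - hold the DTR line high for a while to drive an opto-isolated
    # triac "silicon switch" in a mains cable. A minimal sketch using pyserial;
    # the device name is an assumption for illustration.
    import sys
    import time

    import serial   # pip install pyserial

    def switch_on(port="/dev/ttyS0", seconds=1800):
        ser = serial.Serial(port)   # opening the port lets us drive the control lines
        try:
            ser.dtr = True          # DTR high: triac conducts, light on
            time.sleep(seconds)
        finally:
            ser.dtr = False         # DTR low: light off
            ser.close()

    if __name__ == "__main__":
        switch_on(seconds=int(sys.argv[1]) if len(sys.argv) > 1 else 1800)

    # Example crontab entry (times made up): a "sunny dawn" at 6:45 on
    # weekdays, lights on for 30 minutes.
    #   45 6 * * 1-5   /usr/bin/python3 /home/bill/light.py 1800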

In the first incarnation the uucp component cu handled the line, but more recently I've created a tool to handle the switching, with parameters for which line (still only one now) and how long. Cron still handles the start-up decision. The accuracy of the machine time matters: if the clock has drifted slow and the signal to leave comes late, the bus will have gone.

I also wanted good time for synchronising logs at work, and so I learnt about NTP - network time protocol. NTP distributes accurate time, but a reference clock supplying that time is needed. While the long story is available, the short version is that we now have a GPS receiver feeding NMEA time sequences. While NTP properly configured with a good clock can easily provide accuracy better than one tenth of a second, and distribute this to all local machines, I made a poor choice of receiver and can only rely on better than six-tenths of a second. I don't worry about missing a bus for this reason.
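To give a flavour of what the receiver sends, here is a rough sketch of pulling the time out of an NMEA RMC sentence. The field layout is the common $GPRMC form and the sample sentence is made up; a real NTP reference-clock driver also needs the pulse-per-second signal to get much below a tenth of a second, and this sketch ignores the trailing checksum.

    # Parse the time and date out of an NMEA RMC sentence from a GPS receiver.
    from datetime import datetime, timezone

    def rmc_time(sentence):
        """Return a UTC datetime from a $GPRMC sentence, or None if no fix."""
        fields = sentence.split(",")
        if not fields[0].endswith("RMC") or fields[2] != "A":   # 'A' = valid fix
            return None
        hhmmss, ddmmyy = fields[1], fields[9]
        yy = int(ddmmyy[4:6])
        year = 1900 + yy if yy >= 80 else 2000 + yy              # two-digit year
        return datetime(year, int(ddmmyy[2:4]), int(ddmmyy[0:2]),
                        int(hhmmss[0:2]), int(hhmmss[2:4]), int(hhmmss[4:6]),
                        tzinfo=timezone.utc)

    # Made-up example sentence:
    print(rmc_time("$GPRMC,053010,A,4117.80,S,17446.50,E,0.0,0.0,150797,22.5,E*68"))
    # -> 1997-07-15 05:30:10+00:00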

Limitations in Linux as it runs for me mean there is variation in the machine time, so access to a reference clock is needed. The system has been growing in disk capacity with each transition. The CP/M system used 1.2MB floppy drives, the '286 had first one then two 20MB drives, and the '386 Bell Technologies system had first one then two 40MB drives on the Olivetti. After destroying the second news drive we switched to a newer machine with IDE rather than MFM and went to a 240MB drive, still with Bell Unix. The CP/M disks were transferred in about then, and the QIC backups meant the system did have continuity. We were hurting for TCP/IP into the unix system when Linux passed the critical point for us.

The first Linux system for us was an early Yggdrasil (Norse - tree of life) release. I seem to recall that that one didn't see live use, but I then took out a year's subscription (four issues) with them.

That was the best value I've ever had. They kept missing deadlines for releases and supplied sweeteners of archive distributions. This kept up for about two years with the last supplied distribution still working for us on a couple of machines. We weren't able to mount the unix file systems onto the Linux system but the backup was able to be brought in. More recently we decommissioned the 286. The last backup for it happened by offering it as an NFS server and sucking it into what is now a 2+GB drive. The QIC tape drive is not active now and instead a 2GB DAT drive does backups.

Ah, yes, backups. We have a few tapes and a weekly backup. A script runs under cron and a tape is always in the drive. Much of the system will be rebuilt from CD if needed and then the tape overlaid onto that. Late last year a clone of the heart of the system was done as a Christmas present for friends, with variations, of course. That helped my confidence in the backup policy. The Amanda package that was discussed at a recent Wellington regional meeting is interesting.
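The weekly job need not be anything elaborate. A minimal sketch of what such a cron-driven script can look like (the tape device path and directory list are assumptions for illustration, not the actual script):

    # weekly_backup.py - write a labelled tar archive of selected trees to tape.
    # Run from cron, e.g.:  30 2 * * 0  /usr/bin/python3 /usr/local/sbin/weekly_backup.py
    # Sketch only; /dev/st0 and the directory list are assumptions.
    import subprocess
    import sys
    from datetime import date

    TAPE = "/dev/st0"
    TREES = ["/etc", "/home", "/var/spool/mail"]

    def backup():
        label = f"weekly-{date.today().isoformat()}"
        cmd = ["tar", "--create", "--file", TAPE, "--label", label] + TREES
        result = subprocess.run(cmd)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(backup())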

Distributions: we started with Linux on a Yggdrasil distribution; I've tried out Red Hat, and one machine is still running with that, but the recent installations have been of Debian. More recently the http system has been set up to provide cache and proxy service from the gateway, along with the news and ftp proxy.

The news feed has been over uucp from the beginning. When dial-on-demand PPP was started, uucp was sent over tcp. Most recently the uucp service has become more demanding of the ISP and we have moved to the 'suck' package to manage our feed. uucp is still used for mail - I still have to learn a practical solution for this. Subject to requests, I'll talk about some of the packages we use next time.

Bookshop

We have totally revamped our book selection - there are books here you need to look at! Check the all-new list below for super deals available to UniForum NZ members. All prices, in NZ dollars and including GST, packing and postage, are below the RRP and just cover our costs. The books listed here are taken from The O'Reilly catalogue. IDG & Microsoft titles are also available to UniForum NZ members at a discount. Contact Kaye Batchelor, tel. 04 4950561 for information on these books. Click here to view the complete bookshop holdings.

UniForum NZ Corporate Sponsors

Amdahl International Corporation NZ Office
Digital Equipment Corporation Ltd
EDS (NZ) Ltd
Hewlett-Packard (NZ) Ltd
IBM New Zealand Ltd
Informix Software Ltd
Open Systems Specialists Ltd
Optimation (NZ) Ltd
Oracle New Zealand Ltd
Sun Microsystems NZ Ltd
TGE Ltd
The Seabrook Group Ltd

UniForum NZ Board 1997 - 1998

Officers

Board

Business Office

UniForum NZ
P.O. Box 585
Hamilton

UniForum NZ News - Editor: Brenda Lobb
44 Seabrook Ave
Auckland 7
Tel & Fax 09-827-1679

All Board members can be contacted via Email at Firstname.Lastname@UniForum.org.nz


UniForum New Zealand

UniForum New Zealand is a non-profit society for the purpose of:

The group conducts an annual conference plus regular seminars around New Zealand, holds a library of overseas publications, runs an email directory and a bookshop for use by members, and produces this newsletter monthly (11 issues per year).

To Join UniForum New Zealand:

UniForum New Zealand welcomes new members. If you would like to join, fill out the form below, and post with your joining fee (indicated below) to the Business Office, UniForum New Zealand, P.O. Box 585, Hamilton.

Fees, including GST, for the period 1 April '96 to 31 March '97 are:

Individual Member, $85.00
Student Member (for full-time students with ID), $45.00
Corporate Member, $425.00
Corporate Sponsorship, $990.00
Joint UniForum NZ/UniForum Inc. Member, $150.00


Name:
Address:
Occupation:
Telephone:
Fax:
email address:

Total Enclosed: $