UniForum NZ News
January/February 1997 Volume 7 Number 1. © UniForum New Zealand, Editor: Brenda Lobb.
The veil is slowly being drawn from the mysterious qualities of the UniForumite. Forty-six souls have so far stepped boldly forward to claim membership in more detail than has been available before. Two of these have also received a bottle of wine for their promptness and honesty.
Yes, the UniForum NZ survey continues. If you haven't sent yours in, you have a second chance: there's another copy of the survey enclosed with this issue of News. There are still four bottles of wine (or non-alcoholic alternative) to be distributed.
While the rest of us have been lounging around the beaches waiting for the next wave or the next cyclone, the open systems of the world have plunged onwards with nary a pause. Telecom now offers me international phone calls via the internet. I spoke with some quasi-legitimate hackers who have pieced together their own service splicing cell phones onto the loop from their cars. IBM's announcement of the shoe computer has prompted a flurry of holiday jokes about the significance of secret handshakes. MMX blew in suddenly from the wings to unnerve buyers of previous Pentium PCs. The NetComputer is creeping from the cave with an as yet over-hyped and thickly veiled future ahead of it.
Clearly the New Year will bring more of the unexpected, the unknown and the unannounced. Hardware will emerge from the dreams/fears of last year. Software will appear from the hype surrounding the well-equipped and connected desktop. The vision of dazzling bandwidth will continue for most of us to be a bright spot which ends somewhere beyond the end of our street. You and I, the users and customers of the world, will continue to do what we want, when we can afford it, and in the process confound virtually all of the plans of vendors, developers and world governments. Keep up the good work!
Bruce Miller went to Interop recently:
The Interop has risen rapidly in stature since its early days in Silicon Valley. I wasn't there then, but Andy Linton was. His tales of his participation as one of the behind-the-scenes techno-team members at an early Interop fascinated me over several bottles of wine at the 1992 conference.
UniForum NZ News is published by UniForum New Zealand, tel and fax 07-839-2082, after hours fax 07-839-2084, P.O. Box 585, Hamilton. Material may be reprinted, with acknowledgement, for non-commercial purposes. Views expressed in UniForum NZ News are not necessarily those of the editor or UniForum New Zealand.
These tales have been tantalisingly tugging at the interior adventure lobe of my brain, and last November, I had the opportunity to see at first hand how the Interop has evolved.
Before I even got to the conference, I spent some time with Julie Jones & Rolf Jester, amazed that people still collected real books in quantities worthy of being called libraries. They also have a routine which includes home-based fax and e-mail which typifies the Interop at the personal, everyday level.
The Interop conference itself is not, on the face of it, much different from 50 other computer-oriented conferences. There are workshops, tutorials, keynotes, streams, debates and an exhibition hall. This is the first time Interop has come down under, though. Softbank has bought the Interop conference concept and now runs a year-round series of conferences in Europe, North America and Asia.
The distinguishing feature of the Interop is the core team of techno-gurus who design, test and implement a huge network infrastructure on which the exhibition is based. Only a couple of people are full-time Softbank staff. Another 10 are "donated" by their companies or universities for periods of a year or so. Two sets of network equipment tour on the circuit. The teams follow the schedule, arriving to meet the equipment containers four or five days before a conference. They also meet a local team of as many as 40 who assist in stringing cable, connecting routers and testing the whole mess.
Membership in the volunteer brigades is by invitation. You need to know networks. The usual satisfaction provided by sleepless nights, pizza and long periods of alternating anxiety and exhilaration is about all you can expect in return. Needless to say there is no shortage of volunteers!
The network itself is conceived as a test bed for the most advanced connectivity that can be made to work. And indeed sometimes "making it work" is still happening as the exhibition is assembled around them. The new tech this year involved ATM in several different versions. Fast Ethernet is accepted in an almost passé manner. ISDN is now integrated as a variable outside link.
The best view of the network is from the "control room", a glass-walled area, constantly staffed, before an array of monitors displaying diagnostics which are familiar, though perhaps not on such a scale. Each hour one lucky team member attempts to explain the net to an audience of 20-25. Questions betray the vast range of network experience represented in the conference delegates and make it very difficult for the tour to proceed at other than a basic level, with some molecular level jargon for the initiated. Given that the
conference is assumed to attract people with some existing expertise related to IT, the gap in understanding of current network connectivity issues points to difficulties in both implementing the infrastructure and taking advantage of its benefits.
As to the conference itself, I loved the technical underbelly, but the presentations were a bit disappointing. They were unnecessarily basic, kept to such a simplistic pitch that at the end I couldn't find anything to ask questions about. On top of that, the subjects were for the most part the same things we had at our own conference last year. I spoke to a number of the presenters and they all agreed, explaining that this was at the express request of the conference organisers, in an apparent belief that it is better not to offend a beginner than to satisfy the experienced.
I disagree. Interop has always in my mind represented a level of technical cutting edge experimentation for those involved in making the cutting edge decisions in the market as developers and users. They seem to be losing the focus I thought they had. Perhaps I no longer represent the target audience. I find it sad to see this event playing to a lowest common denominator rather than pulling the debate up to a higher level.
What are your thoughts?
[Perhaps the insistence on a simple pitch is the unfortunate result of today's trend towards running conferences as a profit-making business, necessitating ever-increasing audiences, rather than as a non-profit service to members of the technical community. Perhaps the long-running success of UniForum NZ conferences has something to do with our refusal to follow this trend! - Ed.]
Plan to get to UniForum NZ '97 - it's shaping up to be just as good as ever!
The programme is looking very compelling as top local and international speakers confirm their attendance. Dr Bob Glass, Sun Microsystems' futurist and corporate strategist who gave a vastly entertaining and thought-provoking opening keynote speech at UniForum NZ '95, will return to speak at this year's conference. Carl Cargill of Netscape is threatening to return, too; this standards strategist, author and stirrer par excellence is guaranteed to let off metaphorical and quite possibly actual fireworks at the conference!
Also coming from the USA are Brent Callaghan, an ex-Aucklander now a staff engineer in the Solaris Networking Group at Sun Microsystems in Mountain View, California, and Bill Cheswick of AT&T Bell Laboratories. Ches has worked on network security, PC viruses, mailers, the Plan9 operating system and kernel hacking, and has toured the world officially for the purpose of giving security talks, but, the whisper goes, really in search of the perfect mint imperial. A co-author of "Firewalls and Internet Security: Repelling the Wily Hacker", he is currently working on commercial munitions for the Internet: cryptographic tools to make the Internet safer.
There's a substantial contingent crossing the ditch to speak, too, including David Purdue, Technical Account Manager for SunSoft, covering Australia, New Zealand and Asia South, and the Secretary of AUUG Incorporated, and also a member of the board of ISOC-AU; Mark White, currently UNIX Product Manager for Tandem Computers in Australia; Rolf Jester and Julie Jones of Greenstone Solutions Pty Ltd; and Jan Newmarch from Canberra University.
Added to these there's a strong line-up of local speakers, a wide selection of useful tutorials and of course our usual wonderful social programme (featuring masses of excellent food and wine, included in your registration) which ensures maximum opportunities for that essential "networking".
Watch for your registration brochure with the next issue of News.
Roger de Salis shares some inside knowledge:
The following was handed to me by a Sun Engineer. Readers of this illustrious journal might like to enter an informal competition for a small prize to guess which one. If you have never met a Sun engineer, then......
I actually thought the amazon site was pretty cool. My own favourite site of the month is
as my bomb is on its last legs. The uncharitable amongst you may be forgiven for thinking that it is all champagne and roses working for a vendor.
Computer Web Site of the month:
This is a cool java development site, with many free applets to download and try out. The most impressive application is the "Cooper and Peters" Word/Excel look-alike application. Previous access to this site provided several gif files, but my last access provided a real live demo of a working application.
Search for "Cooper" on the www.gamelan.com site. The other thing I look at fairly often there is the "Integrated Development Environments" section, for useful Java tools.
Picked off the 'net:
Tom Livingstone, not a Sun man, nevertheless highlights the true significance of Java:
Late last year I was interviewed by Stephen Bell for Computerworld on where Unix has come from and where it is going. During the interview I used the term "locationally fluid" to describe the way I foresaw software evolving. He apparently liked the term so much that he quoted it in his article; otherwise I would have forgotten ever using it. In this article I intend to explain what I mean by "locationally fluid" and why it is a significant concept.
About twelve years ago, when I worked in the University and read research journals, I came across an interesting article in an H-P research journal. It described an experiment that researchers had conducted where they attempted to harness the behaviour of worms (the software, rather than garden, variety) to utilise the unused computing resources in their network of workstations at their research lab. These workstations (numbering around 1000) were almost completely unused during the night when all the scientists had gone home. The research project consisted of developing a variety of worms which propagated themselves through the network onto computers which were inactive. Once on the computer, they would perform some shared task until they detected that the rightful owner had returned at which point the worms would nobly suicide in order to release the computing resources they had grabbed. I remember that there were three kinds of worm, of which I can remember the "existential", whose sole function was to exist, and (I believe) the "stochastic" whose purpose I cannot remember. Anyway, I found the concepts behind the research intriguing (and the names of the worms cute).
Some years later, at a UniForum NZ exhibition in Wellington, IBM were proudly presenting a practical demonstration of their newly developed DCE software. This was at the time when DCE was the hot new thing and promised great things for distributed computing. However, it doesn't appear to have taken off to any great extent, though much of the technology and principles have been coopted by distributed object implementations. Anyway, IBM had two RS/6000s and two OS/2 computers networked together. All computers were contributing to generating an image using a technique called "ray-tracing". The master computer was responsible for farming out a piece of the overall job to remote computers, and reassembling the results into a single image. The remote computers ran in a loop requesting some work from the master computer, performing the work, returning the results to the master, and requesting the next lot of work. While this was cute, the really impressive part of it was when they pulled the plug on one of the remote computers while it was processing part of the image - nothing happened; no errors were reported, the application didn't fall over, the data weren't corrupted. Basically, when the remote computer failed to report in with its portion of the job, the master simply reallocated it to another computer. And when power was restored and the remote computer announced it was available again for work, the master gave it the next chunk of the task to work on.
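The fault tolerance in that demo comes down to one simple rule: a chunk of work is not done until its result is back, so a silent worker just means the chunk goes back in the queue. Here is a minimal sketch of that logic - a sequential simulation, not the DCE API; the names and the trivial doubling "render" stand-in are invented for illustration:

```python
from collections import deque

def run_master(chunks, workers):
    """Farm chunks out to workers; re-queue any chunk whose worker fails.

    'workers' is a list of callables; a worker whose plug is pulled
    mid-job is modelled here as one that raises an exception.
    """
    pending = deque(enumerate(chunks))   # (index, chunk) pairs awaiting results
    results = {}
    w = 0
    while pending:
        index, chunk = pending.popleft()
        worker = workers[w % len(workers)]
        w += 1
        try:
            results[index] = worker(chunk)     # a remote call in the real demo
        except Exception:
            pending.append((index, chunk))     # worker lost: reallocate the chunk
    # Reassemble in original order, as the master did with the scan lines.
    return [results[i] for i in range(len(chunks))]

def render(chunk):
    """Stand-in for ray-tracing a scan line."""
    return [p * 2 for p in chunk]

# A worker whose "plug is pulled" on its first job, then recovers:
state = {"failed": False}
def flaky(chunk):
    if not state["failed"]:
        state["failed"] = True
        raise ConnectionError("power pulled")
    return render(chunk)

image = run_master([[1, 2], [3, 4], [5, 6]], [render, flaky])
```

Despite one worker dying mid-job, the assembled image comes out complete and in order - the failure is absorbed by reallocation, exactly the behaviour that made the demo impressive.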
What this demonstrated was two-fold: that for certain categories of application, it was possible to connect computers of different types to form a larger virtual computer; that using multiple computers could provide a high degree of fault tolerance in applications. "This is great!" I thought at the time. Then I looked at what was involved in writing DCE applications, and my interest rapidly waned.
Ray-tracing is an example of an application that benefits from parallel processing technology. In ray-tracing each point on a display is rendered by modelling its path from the original light source. It creates the most realistic images but is horrendously slow. But the most important characteristic of ray tracing is that each point of light can be modelled independently of all other points of light. So you can farm the calculations for each point of light (or a complete scan line in the case of the IBM demo) out to separate processors which can work on their allocated points independently from the others. This parallelism was utilised in "Toy Story" where the rendering of each frame was done on a bank of 117 Sun SPARC 20s.
Of course there are many applications which benefit from parallelism. The RSA public key encryption system was developed in 1977. It is based on multiplying two large prime numbers together to create a very large number as the public key. The original key length was 129 digits (RSA-129). So confident in the security of this length were they that the creators encrypted a secret message and offered $100 to the first person to decode the message. At that time it was estimated that the running time needed to break the key by brute force would be "40 quadrillion years!" In 1993 an international team of volunteers led by researchers at Bellcore set out to crack the code. In April 1994 they did. They achieved this by partitioning the problem (factoring a very large number into its two prime factors) amongst upwards of 1600 computers, ranging from workstations through supercomputers, all working part-time on the problem.
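The key point is that the search partitions cleanly. The real RSA-129 effort used the quadratic sieve, a far more sophisticated algorithm than what follows; this sketch only illustrates how a divisor search splits into independent slices that separate machines could each chew on. It uses naive trial division on a toy semiprime, with the "farming" simulated sequentially:

```python
def factor_in_range(n, lo, hi):
    """Search one slice of candidate divisors - the piece a single
    volunteer machine would work on independently of all the others."""
    for d in range(max(lo, 2), hi):
        if n % d == 0:
            return d
    return None

def distributed_factor(n, machines, chunk=10):
    """Split the divisor search space into chunks and 'farm' them out,
    one chunk per machine per round (simulated sequentially here)."""
    lo = 2
    while lo * lo <= n:
        for _ in range(machines):
            hit = factor_in_range(n, lo, lo + chunk)
            if hit:
                return hit, n // hit
            lo += chunk
    return n, 1   # no divisor found: n is prime

# 3337 = 47 x 71, a small semiprime standing in for a 129-digit modulus
p, q = distributed_factor(3337, machines=4)
```

Each slice needs only the number being factored and its own range bounds - no communication with other slices - which is what let 1600 loosely-coupled machines share the real job.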
In January this year I came across an article that stopped me in my tracks with its implications. It is the direct reason for my writing this. The article was titled "Create your own supercomputer with Java" published in the January 1997 issue of JavaWorld magazine. In this article Laurence Vanhelsuwe describes how to utilise Java and the Worldwide Web to enable vast numbers of computers to contribute their computing power to a common application. The article describes the implementation of a parallel processing scheme where a "WorkerApplet" is downloaded to the desktop and proceeds to request jobs from the central "JobMaster" application, process them, and return the results back to the server. This applet is invisible to the person using the desktop and continues as long as the Web browser is running. Each job only takes a short time (about 2 minutes - short enough to complete while a web page is being read) before returning its results to the server. And just to demonstrate its feasibility, while I had been reading the article, my computer had been coopted to assist in performing a shared task. And the task? Ray tracing! Just to put the icing on the cake, the author had included full Java source code of the application. The JobMaster is 18KB of source and the WorkerApplet is 9KB. The code implements a general client-server engine and communications protocol into which you can plug any suitable algorithm that you wish to have processed.
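Stripped of the network plumbing, the protocol the article describes is a simple cycle. The sketch below borrows the article's JobMaster and WorkerApplet names but is otherwise invented: an in-process stand-in showing how any algorithm plugs into the request/process/return loop (the real JobMaster spoke a network protocol to applets running in browsers):

```python
class JobMaster:
    """Minimal stand-in for the server side: hands out numbered jobs
    and collects the results as workers report back."""
    def __init__(self, jobs):
        self.jobs = list(enumerate(jobs))
        self.results = {}

    def request_job(self):
        return self.jobs.pop(0) if self.jobs else None

    def submit(self, job_id, result):
        self.results[job_id] = result

def worker_loop(master, compute):
    """The WorkerApplet's cycle: request a job, process it, return the
    result, repeat until the master has nothing left. 'compute' is the
    pluggable algorithm - ray tracing in the article's demonstration."""
    while (job := master.request_job()) is not None:
        job_id, payload = job
        master.submit(job_id, compute(payload))

master = JobMaster([10, 20, 30])
worker_loop(master, compute=lambda x: x + 1)   # plug in any algorithm
```

The engine is generic precisely because the master never looks inside a job and the worker never keeps state between jobs - which is also why it doesn't matter where, or on whose desktop, each job actually runs.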
This is the first concrete example for me of "locationally fluid" computing; the creator of the application on the server has no idea where the work is being processed at any point in time. He doesn't even know how many computers are involved. On two successive runs of the application, the entire machine and geographic spread may be completely different. I believe this is the first application to demonstrate the real power and benefits of Java.
Now, consider the near future. Web-based environments are pervasive, both as intranets within an organisation, and as public facilities on the Worldwide Web. Advertising has failed because of its discretionary nature; like print advertising it can be bypassed and ignored. A large number of web-sites have adopted the "push" model, which effectively turns them into narrowcast TV/radio stations, but that market is limited because people's tolerance to force-fed advertising has been eroded by television.
As a source of revenue, popular web-sites turn to compute brokering. They utilise the algorithms and concepts described in the above article, to turn their web sites into parallel processing servers, with the computers of the people who visit their sites acting as the processing clients. This is all above board, and is a quid pro quo for providing "free" content, and is much less intrusive than advertising. Of course these big web sites don't use this vast resource for their own computations, they sell "computing time" to other companies who do have the need; this is the way they fund their web sites.
Extending this just a little, imagine a company which requires a lot of parallel computation. They have a computer which dispatches portions of the job to other computers to process. However these are not the "workhorse" computers described above, they are the web servers of content providers, multiple content providers. These servers then break the sub-jobs further and farm them out to the computers accessing their web pages. This is the ultimate expression of "locationally fluid" computing. It is also the destiny of Java; this is what Java was created for.
Of course, the benefits are not limited to the Worldwide Web. A typical large business in NZ would have 100 computers on desktops, each with 16MB of memory and a processor with the power of a mainframe from five years ago, all mostly idle. These same businesses also have increasing needs for analytical processing and modelling, which just happen to benefit from parallel processing in similar ways to ray-tracing and prime number factoring. Soon these companies will be able to harness the computing power on the desktop to provide a true corporate-wide computing resource, and Bob Metcalfe's law (that the power of a network is proportional to the square of the number of devices connected to it) will apply even more strongly than it does today.
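Metcalfe's law follows from counting the possible pairwise connections: n devices give n(n-1)/2 distinct pairs, which grows roughly as n squared. A back-of-the-envelope illustration:

```python
def possible_pairs(n):
    """Number of distinct pairwise connections among n devices -
    the quantity Metcalfe's law takes as a proxy for network value."""
    return n * (n - 1) // 2

# Connecting 100 idle desktops creates far more than 10x the
# links of connecting 10:
pairs_10 = possible_pairs(10)
pairs_100 = possible_pairs(100)
ratio = pairs_100 / pairs_10
```

Ten times the machines yields a hundred-odd times the possible pairings - which is why pooling every desktop in a company is so much more than the sum of its parts.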
Chris Goodyer draws some interesting parallels:
The summer holidays offer opportunity for reflection, and I was recently musing over our new coalition government. It suddenly occurred to me that there were a number of parallels between the political turn of events and much of what we now take for granted in the open systems world. Perhaps we should not, then, be too pessimistic about how it may turn out?
For a start, we have seen the move from a proprietary, one-party government to one which has had to embrace a number of standards - notwithstanding that, being the first of its type, it has the opportunity to define some of those standards to its own benefit! But here we have an example of one vendor simply not being able to satisfy the demands of the majority of its users. We talk often today of choosing "best of breed" software or systems, which are then brought together by system integrators to address the task at hand. Has not the coalition perhaps brought together the best of various policies, with the party leaders acting to integrate these into a cohesive whole?
To take this idea further, think about the various consortia that are formed from time to time to tackle large systems projects. Here we have a consortium, or strategic alliance, of two parties, led by a prime minister (vendor) whose job it is to ensure that the various ministers (sub-contractors) do their job to meet the overall contract. There must be considerable doubt, however, as to whether they would be prepared to outsource some of the work to other parties!
Some of the features of open systems are expressed in the terms portability, scalability and interoperability. The coalition has certainly shown that standpoints taken before the election were easily able to be carried across to another platform, though the question remains whether some of the previously disparate views can be made to run effectively on that same platform!
As regards scalability, I have been amazed at how quickly the size of cabinet was able to be ramped up, despite pre-election claims by one of the "vendors" that their system should be able to function with a much smaller cabinet or "processor". Is this a case of RAMming home the advantage? We have yet to experience how good the new government will be in terms of interoperability, with the track records of some of its members not leading to much confidence.
When looking for new systems, one of the common methods is to advertise the situation to potential vendors with a request for proposal. Of course, in this case the new system was the chance to govern the country, and the various parties accordingly registered their interest, and presented their manifestos (proposals). What was presented pre-election tended to change post-election, especially in the negotiations to achieve a coalition, but I suppose that isn't too different from a consortium of vendors finding different ways to approach a problem, once none of them get the whole contract (with the unsuccessful vendors standing disgruntled on the sidelines, muttering dire warnings of how the winners' offerings are bound to fail). The negotiations in this instance, however, could scarcely have been conducted in less of an "open systems" manner, with the purchasers (voters) entirely excluded from the contractual phase.
While some of the clauses are now known, the extent of others will only be revealed in the coming years. If things go wrong, the two parties may well blame each other (consortium members), the new style of government (leading-edge technology) or the opposition (incompatible components), to limit their liability. Or perhaps they can find an out through force majeure, and ascribe it all to some higher being (Rob, are you up there?) It would be a fair bet, since we are dealing with politicians here, to say that the warranties are scarcely worth the paper they are written on. At least we have the protection of a 3-year term and automatic termination at that time!
So, it can't be all that bad, can it? And we, who have experienced the agonies and the ecstasies of life in the open systems environment, can watch how those in the house come to grips with their own new world.
Chris can be contacted by email.
The UniForum NZ Research Fund has money available to support research projects of interest to UniForum NZ members.
Contact Anand Raman, tel. 06 350 4186 or email
UniForum NZ maintains an ever-expanding and up-to-date library of relevant journals, books and technical documents which has been built up over a number of years from subscriptions, newsletter exchange arrangements, free publications and individual donations. Here UniForum NZ Librarian, Ray Brownrigg, offers an update:
We have received copies of free documents from Standards New Zealand. Each lists the relevant standards in a particular area. They are:
Also received is a copy of the 1996 SANS (System Administration, Networking and Security) System Administration and Security Salary Survey (SASSS?). In addition a flyer and Registration Form have been received for the SANS97 Annual Conference, April 21-26 1997, Baltimore.
I have some spare Registration Brochures for UniForum '97, March 12-14, San Francisco; if you are interested in going to this year's conference and trade show, let me know.
With the latest NZ ComputerWorld I received a sample copy of Management Technology Briefing V4:1 with an invitation for a free charter subscription. This I will apply for to see what it turns out like.
Finally for this report, the following regular periodicals have been received:
Keep those letters, faxes and emails coming!
To obtain a complete list of library holdings or order publications to take out on loan, contact Ray at ISOR, Victoria University, PO Box 600, Wellington. Phone 0-4-472-1000 x 2018; fax 0-4-495-5118 or email.
If you're having trouble keeping up with the latest releases from O'Reilly or finding an important book here in New Zealand, check out these services which are available to organisations with network connections.
If your company has a news feed, subscribe to biz.oreilly.announce to receive announcements of new releases from O'Reilly. O'Reilly books are available through UniForum New Zealand at a substantial discount to members.
Computer Literacy, based in San Jose, is one of the largest stockists of books on all aspects of computing and they now take orders by email. You do need to have a credit card number and signature on file before they will accept an email order. Send enquiries to firstname.lastname@example.org
The following is a review by Kaye Batchelor :
O'Reilly and Associates have just published the second edition of "DNS and BIND", a complete guide to the Internet's Domain Name System (DNS) and the Berkeley Internet Name Domain (BIND) software, the UNIX implementation of DNS. The new edition also covers using DNS and BIND with Windows NT. It's a complete update of this classic Nutshell Handbook, which has served as "the" source of information for system administrators who manage domain or name servers.
DNS is the system that translates hostnames (like "rock.ora.com") into Internet Addresses (like 220.127.116.11). Until BIND was developed, name translation was based on a "host table"; if you were on the Internet, you got a table that listed all the systems connected to the Net and their addresses. As the Internet grew from hundreds to hundreds of thousands of systems, host tables became unworkable. DNS is a distributed database that solves the same problem effectively, allowing the Net to grow without constraints. Rather than having a central table that gets distributed to every system on the Net, it allows local administrators to assign their own hostnames and addresses and install these names in a local database. This database is automatically distributed to other systems as names are needed.
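The host-table scheme DNS replaced can be shown in miniature. The entries below are made up (192.0.2.x is an address range reserved for documentation); the point is that a flat table must be copied to every machine in full, while DNS delegates each zone to its own administrators and fetches entries on demand:

```python
# A miniature /etc/hosts-style table: the scheme DNS replaced.
# Every machine on the Net needed a complete, centrally-maintained
# copy - workable for hundreds of hosts, unworkable for millions.
HOST_TABLE = {
    "rock.ora.com": "192.0.2.1",       # illustrative addresses only
    "mail.example.nz": "192.0.2.25",
}

def lookup(hostname, table=HOST_TABLE):
    """Flat-table resolution. DNS replaces the single table with a
    distributed database: each zone's administrators publish their own
    names, and resolvers fetch and cache entries as they are needed."""
    addr = table.get(hostname)
    if addr is None:
        raise KeyError(f"unknown host: {hostname}")
    return addr

addr = lookup("rock.ora.com")
```

The table's fatal flaw is visible even at this size: adding one host anywhere means redistributing the whole table everywhere, which is exactly the constraint DNS was designed to remove.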
In this new edition of "DNS and BIND", the authors describe BIND version 4.8.3, which is included in most vendor implementations today. In addition, readers will find complete coverage of BIND 4.9.4, which in all probability will be adopted as the new standard in the near future.
In addition to covering the basic motivation behind DNS and how to set up the BIND software, this book covers many more advanced topics, including using DNS and BIND on Windows NT systems; how to become a "parent" (ie "delegate" the ability to assign names to someone else); how to use DNS to set up mail forwarding correctly; debugging and troubleshooting; and programming.
One of the authors, Paul Albitz, is a software engineer at Hewlett-Packard. Paul worked on BIND for the HP-UX 7.0 and 8.0 releases. During this time Paul developed the tools used to run the hp.com domain.
Since then Paul has worked on networking HP's DesignJet plotter and on the fax subsystem of HP's OfficeJet multifunction peripheral. Before joining HP, Paul was a systems administrator in the CS department of Purdue University. The other author, Cricket Liu, works for Hewlett-Packard, where he consults with HP customers on TCP/IP networking and UNIX, including network security and the Domain Name System.
"DNS and BIND" is available from the UniForum NZ Bookshop, for $85.50. Published December 1996, ISBN: 1-56592-236-0, 438 pages.
Members, don't forget the UniForum NZ Bookshop - check the super specials and new books below.
All prices, in NZ dollars and including GST, packing and postage, are below the RRP and just cover our costs. Mail orders to Kaye Batchelor, c/o EDS, PO Box 3647 Wellington. Enquiries to Kaye, phone 04-495-0561 or fax 04-474-5130.
Don't forget an order number for business orders, please.
Remember that when you join or rejoin UniForum NZ you can be in for a really good value deal: for a total payment of $150, UniForum NZ members can now become members of UniForum Inc. as well. As well as all our usual goodies, you get IT Solutions magazine every month; 3-4 technical publications per year; access to the UniForum Products Directory via Internet; email delivery of the UniNews fortnightly newsletter; and numerous other benefits like US conference discounts. This represents a huge saving over all previous deals - GO FOR IT!
UniForum NZ News - Editor: Brenda Lobb
44 Seabrook Ave
Tel & Fax 09-827-1679
All Board members can be contacted via Email at Firstname.Lastname@UniForum.org.nz
UniForum New Zealand is a non-profit society for the purpose of:
The group conducts an annual conference plus regular seminars around New Zealand, holds a library of overseas publications, runs an email directory and a bookshop for use by members, and produces this newsletter monthly (11 issues per year).
UniForum New Zealand welcomes new members. If you would like to join, fill out the form below, and post with your joining fee (indicated below) to the Business Office, UniForum New Zealand, P.O. Box 585, Hamilton.
Fees, including GST, for the period 1 April '96 to 31 March '97 are:
Total Enclosed: $