“Distributed Graphics: Where to Draw the Lines?” Moderated by Richard (Dick) L. Phillips

  • ©Richard (Dick) L. Phillips, Michael E. Pique, Cleve Moler, Jay Torborg, and Donald P. Greenberg

    Transcript of the welcoming speech:
    Good morning, ladies and gentlemen. Welcome to the panel entitled “Distributed Graphics: Where to Draw the Lines?”

    My name is Dick Phillips. I’m from Los Alamos National Laboratory and I’ll be your chair for this session. I’ll be joined by a great group of panelists — friends and colleagues all.

    Our second speaker, following me, will be Michael Pique from Scripps Clinic. Following him will be Cleve Moler from Ardent Computer. After Cleve we’ll hear from Jay Torborg, who is associated with Alliant Computer. And batting in the clean-up position is going to be Don Greenberg from Cornell University.

    I have to give you one administrative announcement. You probably know this by now if you’ve been attending panel sessions all week, but once again: these proceedings are being audiotaped for subsequent transcription and publication. That means that when we open up the session for questions and answers, which will be in another 30 or 40 minutes, if you would like to ask a question you must come to one of the microphones situated in the aisles. There is one in just about every aisle, partway back and close to the front. To be recognized, please state your name and affiliation; I’ll remind you of that when we get into the question and answer session.

    The title of our panel begs a question — where to draw the lines. Well, the trivial answer to that question is obviously on the display that you have available. The real implication of that title was where to draw the lines of demarcation for graphics processing. You’re going to hear from me and from my other panelists several different points of view. Just when you thought everything was settling down and it was clear that all graphics processing was moving out to workstations or graphic supercomputers, you’re going to hear at least two different points of view that may sound a bit nostalgic.

    Let me take you back in time just a bit, and this is a greatly oversimplified graphics time line — where we have been and where we are and where we’re going in the evolution of visualization capability.

    I’m not going to dwell too much on the part of this time line to the left; we’re really interested in what’s up at the right hand side. But I can’t resist pointing out that back in the days which I have labeled pre-history here, a lot of us can remember getting excited about seeing output in the form of a printer plot, thinking that we were doing visualization and that that was really computer graphics. And I for one can remember the first time I had 300 baud available to me on a storage tube terminal, and I thought, this is blazing speed — I cannot believe what kind of graphics capability I have got now.

    Where things really get interesting, though, is if you move along that time line to the right, up into the mid 1980s. I have put some, I think, seminal events on there — Silicon Graphics introducing the Geometry Engine in the workstation; well, workstations in general. That was a real watershed event that has considerably changed the way we do graphics and where we do graphics.

    Then as we move into the later part of the 1980s, I have noted the appearance of graphics accelerators for workstations. These are specialized plug-in boards that have graphics features like Phong shading and high speed transformations built into them. Graphic supercomputers like Ardent and Stellar and HP/Apollo appeared in that time frame. Then, looking a little further into the ’90s, I have indicated that the arrival of very high speed networks is going to have a profound effect on the way we do graphics display and how we distribute the activities associated with it.

    Let me give a very oversimplified couple of statements on what gave rise to the need for specialized graphics hardware — the accelerators that I talked about and indeed the graphic supercomputers. As I’ve said, to terribly oversimplify, it was certainly the need for real time transformations and rendering. Many of the advances in computer graphics over the last 10 or 15 years we can now find built into the hardware of the workstations and graphic supercomputers available to us.

    One of the other reasons for wanting to bring all of that high speed computational capability right to the desktop, as it were, was to compensate for the lamentably low communication bandwidths which we had then — which we have now, as a matter of fact. And I’m including Ethernet, and I’ll be bold enough to say that FDDI, which is not really upon us yet, is also in that lamentably slow category for many of the kinds of things we’d like to do.

    It turns out — in my view, at least — that that specialized hardware, wonderful as it is for many, many applications (and make no mistake, it has revolutionized the way we can do interactive graphics), is not useful for all applications.

    One application, which I’ve listed as the first bullet, is one where we’re doing specialized rendering — research rendering, let’s call it. Not all the research in rendering has been done — right? So Gouraud shading and Phong shading and so on are not necessarily the be-all and end-all. There’s a lot of interesting work being done; some of it has been reported at this conference, as a matter of fact.

    That is really a small reason for wanting to do the graphics computing on yet another system. But the next one that I’ve listed is a very compelling reason at many installations, particularly where large scale, heavy-duty simulations are being done. I’ve mentioned that I’m from Los Alamos, and that’s certainly one center where there are computations done on supercomputers that need to be visualized, and because of the nature of those computations, all of the specialized hardware in accelerator boards and graphic supercomputers is not necessarily useful. Indeed, I’ll argue that in many cases it’s of no value whatsoever.

    The last point I want to make here — before I show you a couple of specific slides of the simulations I’m referring to — is that the emergence of very high speed networks, both local networks and national and international networks, is going to provide a way for these large scale simulations to take advantage of graphics hardware that does not necessarily have the specialized capabilities we just talked about.

    At Los Alamos a group of folks in our network engineering department have taken the lead in defining what is called the High Speed Channel specification. Before I get to that, let me just give you an idea of the kinds of computations that are being done at Los Alamos — and I know at many other places — that simply can’t take advantage of the specialized hardware that I’ve just been referring to. This happens to be the mesh that’s associated with a finite difference computation for some simulation. It doesn’t really matter what it is, but I just wanted to show you that we’re talking typically tens of thousands of individual mesh points, and I can guarantee you this is a fairly sparse mesh compared to the kinds of things that most of our users encounter.

    The point in showing you this is that as the simulation evolves in time, there is a different version of this mesh for every single time step. The scientists who are doing the simulation would like to be able — either after the fact or perhaps if the timing is appropriate — to steer the computations that are going on by being able to visualize the evolution in time of meshes like this. And they need to be sent to some display device. And ideally you’d like to do that at the rate of 24 frames per second, but we can go through some computations and find that’s simply not feasible with the kind of network bandwidths that are available today.
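    The computations the speaker alludes to are easy to reproduce. Below is a rough feasibility check in Python; the mesh size and bytes-per-point figures are illustrative assumptions, while the 24 frames per second target and the nominal Ethernet, FDDI, and 800 megabit channel rates come from the surrounding discussion.

        # Back-of-envelope check: can one mesh per time step reach a display
        # at 24 frames per second? The mesh figures below are assumed values.
        points_per_mesh = 50_000    # "tens of thousands of mesh points" (assumed)
        bytes_per_point = 12        # assume 3 single-precision coordinates per point
        frames_per_second = 24      # the ideal rate mentioned above

        required_bps = points_per_mesh * bytes_per_point * frames_per_second * 8
        print(f"required: {required_bps / 1e6:.0f} Mbit/s")   # about 115 Mbit/s

        # Compare against nominal link rates (protocol overhead ignored).
        for name, rate_bps in [("Ethernet", 10e6), ("FDDI", 100e6), ("HSC", 800e6)]:
            verdict = "feasible" if rate_bps >= required_bps else "not feasible"
            print(f"{name:8s} {rate_bps / 1e6:5.0f} Mbit/s  {verdict}")

    Even with these modest assumptions, a 10 megabit Ethernet falls short by an order of magnitude and a 100 megabit FDDI ring is marginal at best, which matches the “lamentably slow” characterization earlier; only something on the order of the 800 megabit channel leaves headroom.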

    The specialized hardware that I’ve just been talking about gives us no help at all here, because what I need to be able to do is to send one instance of this mesh to the display device for every time step, as I mentioned a moment ago.

    In addition, the scientists at Los Alamos and other places would like to have the counterpart of a numerical laboratory. This example is completely synthesized, but many of you may have had experience in the past with visualization techniques in fluid flow, where you can actually see shock waves by various lighting techniques. The intent here is to be able to simulate that situation and to show the flow evolving — not necessarily as it’s being computed, but perhaps after the fact — and to be able to pick out important points by seeing a temporal evolution of that particular simulation.

    So those are just a couple of examples that have given rise to the development of a high speed channel specification and an accompanying network at Los Alamos. And I want to say right now — just so you don’t think, oh great, a special purpose solution for a national laboratory that no one else will ever be able to use — not so.

    Many of you out there, I am sure, know about this already, and I know several of our panelists are either aware of or working on high speed channel hardware for their particular products. About 30 vendors have signed on to the high speed channel specification.

    In addition, Digital Equipment Corporation is building the corresponding network, which is called CP*. I’m not going to go into network details here because that’s not my point. I really wanted to describe what is now a new highway for data transmission that facilitates my job, which is to help the scientists do the visualization that they need to do.

    So what we’re seeing here is a very simplified view of this high speed channel, which is spec’ed at 800 megabits per second, and a corresponding crossbar switch-style network that is going to allow effective point-to-point connections between the various components of the computing environment — the supercomputers, the data storage devices, and the display devices. And, unlike with a bus structure, each user will effectively have that complete bandwidth available to him or her.
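    To make the bus-versus-crossbar distinction concrete, here is a minimal sketch, assuming a hypothetical count of simultaneous transfers; only the 800 megabit figure comes from the talk.

        # Contrast a single shared bus with a crossbar switch.
        link_rate_mbps = 800        # channel rate from the talk
        active_transfers = 8        # hypothetical number of simultaneous transfers

        # On a bus, one shared medium divides its bandwidth among active transfers.
        bus_per_transfer = link_rate_mbps / active_transfers

        # A crossbar sets up independent point-to-point paths, so each disjoint
        # source/destination pair sees the full channel rate at the same time.
        crossbar_per_transfer = link_rate_mbps

        print(f"bus:      {bus_per_transfer:6.1f} Mbit/s per transfer")
        print(f"crossbar: {crossbar_per_transfer:6.1f} Mbit/s per transfer")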

    A larger view of that network is shown here, and it gives us an idea of how we might interconnect the various devices. Again, I don’t want to go through the details, but you’ll notice that we’re accommodating FDDI gateways, so that FDDI LANs can be used easily in this environment, along with various workstations. I’ve shown a Sun workstation there. I described vendors who are signing on to this concept, and Sun is providing a high speed interface to their TAAC board, which can then be put into a Sun workstation and connected at 800 megabits directly over the network.

    I mentioned earlier that this is not necessarily limited to just our local area networks. Many of you are probably aware of the work that’s going on now to establish these so-called national data highways. The Corporation for National Research Initiatives is coordinating an activity to establish centers throughout the United States that will participate in a test bed of what is to become in the 1990s a four gigabit data highway spanning the United States.

    So I’d like to leave you with this thought: while we have migrated a lot of graphics-related computing to workstations and specialized graphic supercomputers, the emergence of extremely high speed data communications makes one rethink that arrangement — particularly when you are faced with the kinds of computing tasks I just mentioned, the large scale simulations we run at national laboratories.

    I’m going to stop my heretical remarks here and I’m going to turn it over to the panel to describe several different points of view.

    As I mentioned earlier, the next speaker will be Michael Pique from Scripps Clinic and Research Foundation. Michael? Thank you.

