What do you reckon?

Posted by DragonFire 
What do you reckon?
January 17, 2017 02:33AM
I got asked recently if I thought AmigaOS 4 would do better as an x86 port, or an ARM port, or both.

It is a commercial release, and not a "free" operating system as such. Who would pay for it?

I didn't think x86 was that sensible, really. If you are going to charge money for such a thing, people have to have an application they need the OS for. Some might pay out of curiosity, but not many, when they can have Linux for nothing if they want to be curious, or emulate a basic Amiga under Windows or Mac (which is horrible for AmigaOS 4), or run AROS (which is much faster for running native Amiga apps, much like Wine runs Windows apps under Linux).

On the other hand, a Pi running AmigaOS 4 could be a bit of a beast for robotic controller apps like 3D printing. On ARM systems, it could really do things other OSes would struggle with.

AmigaOS has much better real-time operating system facilities, and is much more streamlined in terms of resources needed, than Raspbian.

Current 32-bit 3D printer controllers are based around ARM, but not Pis as such. There is LinuxCNC, a real-time CNC control system built around x86 PC hardware (and lately the Pi), but it doesn't seem to have been developed much (I could be completely WRONG on this score). Also, it does not multitask freely; it is a kind of single-shot application for control only. User input is not welcome, as that disturbs timing.

AmigaOS was always designed for 32-bit applications, from the mid 80s onwards. It has quite a well-developed 3D base in terms of modelling software - Lightwave started on it. AmigaOS was always designed to multitask, but has never had a decent processor. Indeed, the whole software base was developed around a resource-scarce processor, so it really had to be tweaked and tuned for efficiency. Only during the last year or so have the old classic Amigas been getting even moderately powerful processors by today's standards: the Apollo Vampire programme, very small in scale, but quite amazing in terms of machine upgrade.

Thoughts? I daresay some people would think it's overkill, but it seems a logical area to explore.

I don't mind bias on the subject - I am heavily biased myself - but in terms of precisely calculating accelerations and getting stepper pulses pelted out at maximum rate, it seems to have awesome potential as a concept.
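On the "precisely calculating accelerations" point: at its core, a motion controller has to work out the time between successive step pulses along a trapezoidal velocity profile, for every step, in real time. A minimal sketch of that arithmetic (Python for readability; real firmware does this incrementally in fixed-point, and all names and numbers here are my own illustration, not from AmigaOS or any printer firmware):

```python
import math

def step_intervals(total_steps, v_max, accel, steps_per_mm):
    """Time (s) between successive step pulses for a trapezoidal
    velocity profile: accelerate, cruise at v_max, decelerate.
    v_max in mm/s, accel in mm/s^2."""
    step_mm = 1.0 / steps_per_mm                 # distance per step
    # steps needed to reach v_max from rest: d = v^2 / (2a)
    accel_steps = min(total_steps // 2,
                      int(v_max ** 2 / (2 * accel) * steps_per_mm))
    intervals = []
    for n in range(total_steps):
        if n < accel_steps:                      # ramping up
            v = math.sqrt(2 * accel * (n + 1) * step_mm)
        elif n >= total_steps - accel_steps:     # ramping down
            v = math.sqrt(2 * accel * (total_steps - n) * step_mm)
        else:                                    # cruising
            v = v_max
        intervals.append(step_mm / min(v, v_max))
    return intervals
```

The controller's real-time job is then to emit each pulse at its computed instant without jitter - which is exactly where a lean RTOS, or a dedicated core, earns its keep.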

Edited 6 time(s). Last edit at 01/17/2017 03:20AM by DragonFire.
Re: What do you reckon?
January 17, 2017 03:44AM
My Amiga 500 served me well at the time (and an Atari ST). I used Real3D rather than Lightwave, plus DPaint and Scala, but when it came to moving up I had a choice: the 2000 or the 4000? I really wanted the Video Toaster, but it was NTSC. Or the Mac 840AV (still a 68k processor) with the Radius VideoVision capture card... I was persuaded to go the Mac route: Photoshop and After Effects, Strata, Infini-D. Then PowerMacs came out and I had a dabble, but the writing was on the wall; I moved to PC for 3DS MAX and have never looked back.

So do I want or need an Amiga printer controller? I'd probably pass, but it's interesting to see what could happen. A beefy Amiga could be fun, but I'd have to figure out where all that old software was, and then... well, it would be really old, and such a time waster.

Edited 1 time(s). Last edit at 01/17/2017 03:47AM by MechaBits.
VDX
Re: What do you reckon?
January 17, 2017 05:00AM
... back then I was more focused on the Atari ST, before switching to x86 ... I have some working Atari STs and an Amiga 500, but haven't started them in over five years now ... but it could be interesting, if boosted with more power.

For me, more interesting would be "massive multiprocessing" with cheap CPUs and cores - I did my Master's thesis simulating grids of up to 128x128 Inmos Transputers (20 MHz, 1 MByte per chip) to outperform Cray X-MP supercomputers in 1988, so I'm really interested in what would be possible today.


Viktor
--------
Call for the project "garbage-free seas" - [reprap.org] -- German Facebook group - [www.facebook.com]
Re: What do you reckon?
January 17, 2017 06:51AM
Anything that can speed up renders is what I'm after; multiple CPUs needed... I don't have the Amiga or ST anymore, but the 840AV is still here collecting dust... at the price I paid, I couldn't let it go.

Edited 1 time(s). Last edit at 01/17/2017 07:02AM by MechaBits.
Re: What do you reckon?
January 17, 2017 10:22AM
Uh, sorry. Having old software would be an option, but the idea is real-time 32-bit controllers that can output faster than anything current, more accurately, and less costly.

ARM covers a lot of ground in this area, and the fact that it might have "Amiga" on it is more of a coincidence than anything. Actually a whole set of coincidences.

I guess it boils down to how the AmigaOS 4 for ARM that they were proposing shapes up against the various firmware and system solutions already out there for 32-bit stepper motor controllers and similar. That's a lot of variations to measure up to. Hyperion are the developers for that.

If you just want to use the old software, AROS, WinUAE or FS-UAE do a very good job of showing a classic Amiga running an old piece of software - but not a quad-core (or more) ARM doing the same job as an Arduino Mega at crazy-fast speed. That is potentially where it might end up: not just faster but cheaper, even with a proprietary OS. If Hyperion can shift millions of crazy-speed units, each unit licence need not be expensive.

Anyway, thanks for replies.

Edited 2 time(s). Last edit at 01/17/2017 10:30AM by DragonFire.
Re: What do you reckon?
January 17, 2017 10:29AM
What you're saying is, a RasPi3 with AmigaOS4 could run a printer standalone, instead of only hosting Octoprint?
Re: What do you reckon?
January 17, 2017 10:40AM
Quote
o_lampe
What you're saying is, a RasPi3 with AmigaOS4 could run a printer standalone, instead of only hosting Octoprint?


Well, not exactly JUST that... Maybe such hardware, using that OS and the right software, could make the printer manoeuvre at higher G than any current solution. That doesn't automatically increase print speed so much; it depends on how fast you can get extrusion happening at higher speeds. But it could end up giving the oomph needed for controller performance boosts - in speed, accuracy, tight feedback loops, and all sorts of ways.

The hardware is lean, and the software is potentially very lean. It's how that stacks up as a solution that I'm interested in, really. The F-35 control system is way more than a single Pi, and that too has to accurately model acceleration in a live 3D environment - although it has to do a lot more than just that. So I guess you could compare the two approaches: one real and classified, one theoretical and kind of hazy.

Anyhow, let's see what Hyperion actually do with the idea. It's their baby; they can nurse it for a bit. They're not open source with AmigaOS 4, so maybe that is a dead-end path as far as RepRap goes. Somehow I think it's going to happen, in some form anyway. Most of the jigsaw pieces are there already, but not quite all.

Edited 5 time(s). Last edit at 01/17/2017 10:52AM by DragonFire.
VDX
Re: What do you reckon?
January 17, 2017 05:53PM
... think about a "3D grid" of cheap but fast processors, simulating the complete 3D build volume of your printer, with "live" insertion of the STL object and calculation of the slices and moving paths in real time.

We had to simulate/calculate the electrical field inside ion and electron sources and focusing structures for designs later built for experiments at CERN or at GSI in Darmstadt.

Back in 1985 it was a pretty hard task to plan, design, simulate and correct a construction idea until it was functional - sometimes weeks to months of time on a big mainframe computer.

The simulation time for one of our benchmarks then was something like 300 hours on an Atari ST (costing 1,000 Deutschmarks at the time) and 2 seconds on a Cray X-MP supercomputer (costing 3 million USD, or 7 million Deutschmarks, at the time) ... my diploma thesis set out to show that an Atari ATW (Atari Transputer Workstation) with a maximum load of Transputer blades, at a price of "only" a quarter million Deutschmarks (108 thousand USD then), would need nearly the same time as the Cray ... and got it solved (simulated) in 2.2 seconds.

The modern small µCs are fast and cheap enough to build a "3D supercomputer grid" with real-time capabilities in simulation and graphical display, within the range of a DIY project for only some hundreds to thousands of Euros/USD ...
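For the curious, the price/performance claim can be checked with a few lines of arithmetic, using only the figures quoted in the post above (Cray X-MP: ~2 s and ~7 million DM; Atari ATW: 2.2 s and ~250,000 DM; Atari ST: ~300 hours):

```python
# Figures as quoted in the post above (times for the same benchmark)
atari_st_time_s = 300 * 3600      # ~300 hours on an Atari ST
cray_time_s     = 2.0             # Cray X-MP
atw_time_s      = 2.2             # Atari Transputer Workstation

cray_price_dm = 7_000_000         # ~3M USD = ~7M DM at the time
atw_price_dm  =   250_000

speedup_vs_st = atari_st_time_s / atw_time_s    # ~490,909x faster than the ST
price_ratio   = cray_price_dm / atw_price_dm    # the Cray cost 28x more
# time-x-price product: lower means better value for money
value_ratio = (cray_time_s * cray_price_dm) / (atw_time_s * atw_price_dm)
```

So the transputer box was roughly half a million times faster than the ST, and about 25x more cost-effective than the Cray on this one benchmark.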


Re: What do you reckon?
January 17, 2017 09:17PM
I guess we must have been contemporaries, Viktor. I started my career writing parallel applications on transputers, mostly for military applications and running on Meiko Computing Surfaces. Our standard reference for comparisons was a VAX 11/780. The transputer software, written in Occam and assembler, did pretty well. We did some more blue-skies stuff which used the transputers to provide enough oomph to simulate 'out there' architectures like neural networks and optical computers. The optical stuff (done by a colleague of mine) was very impressive, but the estimated power budget was out of reach (roughly one small power station). The Meiko boxes weren't particularly cheap - I remember wheeling one down a corridor and realising it cost more than my house.
VDX
Re: What do you reckon?
January 18, 2017 10:35AM
@JamesK - when were you involved in Transputer/Occam?

I started in 1986/87 and was finished in '90 ... but only because of "internal financing struggles" -- I built, programmed, simulated and documented all my development in less than a year of accumulated time ... and had a pretty interesting time with my colleagues and other (hardware) projects at the Institute of Applied Physics in Frankfurt am Main.

I did some private tinkering with graphics, ray tracing and radiosity, which back then would have been a great field for parallelising too - my work simulating the "data crossover" in electric fields was not so different from light and optical effects in graphical rendering ... dividing the scene into discrete "voxels" could speed up rendering drastically - rendering a complex scene with much reflection and refraction on a PC took nearly an hour, while the estimated time on a 2D grid of only 4x4 transputers would be just a few seconds.

There was another group in the United States testing massive grids of 4-bit processors that got impressive results too ... so it could be that a bunch of simple and cheap µC chips, coupled with 4 (2D) or 6 (3D) serial ports, each only "working" in a local 3D space of, let's say, 16x16x16 pixels/voxels and only interfacing with their direct "neighbours", could calculate/simulate a 3D scene with enough power to outperform modern GPUs ... or you could program GPU nodes for a similar architecture with "local" 3D interfacing to make them much faster ...
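The "only talking to direct neighbours" scheme described above is what is now usually called halo (or ghost-cell) exchange. A toy version, shrunk to a 1D chain of nodes each holding a small block, with a three-point averaging stencil standing in for the real physics (everything here is illustrative - no actual transputer toolkit is being used):

```python
def halo_exchange_step(subgrids):
    """One synchronous step of a grid of nodes: each node sees only its
    own block plus a one-cell 'halo' copied from its direct neighbours,
    then relaxes every cell with a three-point average (a diffusion
    stencil). On real hardware the halo copy would be a link/serial-port
    message; here each node is just a list of floats."""
    new = []
    last = len(subgrids) - 1
    for i, g in enumerate(subgrids):
        left = subgrids[i - 1][-1] if i > 0 else g[0]      # halo cells
        right = subgrids[i + 1][0] if i < last else g[-1]
        padded = [left] + g + [right]
        new.append([(padded[k] + padded[k + 1] + padded[k + 2]) / 3.0
                    for k in range(len(g))])
    return new
```

Each step, information crosses at most one node boundary, so the communication cost per node stays constant as you add nodes - the property that made (and still makes) these grids scale.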


Re: What do you reckon?
January 18, 2017 11:40AM
I don't remember the exact dates, but it would have been the late 80s. We had a couple of papers published that were dated '91, but there was quite a latency between the work and the publication. In the early 90s I moved to an outpost of Southampton University, and we did a lot of interesting projects bridging the gap between hard-core academics and industry. Good times.

There are plenty of application domains that are readily amenable to parallel techniques, so I was somewhat surprised that it became less popular. There was a period of rapid increase in processor speeds that always made it seem more cost-effective to wait 18 months for a new CPU than to tackle the hard work of algorithm design, and I think that contributed to the premature death of parallel processing. Of course, these days processor gains have slowed dramatically, but the world seems to have moved towards GPU utilisation instead of general-purpose CPUs. That said, the largest supercomputers remain solidly multi-CPU and presumably are being used with explicitly parallel algorithms (rather than just pushing through a bulk workload of unrelated tasks).
VDX
Re: What do you reckon?
January 18, 2017 01:16PM
... yes, my experiences/thoughts were similar - a pretty "almighty" technology was buried in primitive tasks and faded away without much notice.

But this was not only with hardware - I had the same experience with some software concepts. Around 1987/'88 I read some papers on simulated neural networks and did some experiments with "chaotic" solution strategies for TSP problems and CNC control ... I got a 99.99% result for the path planning of a PCB with 400 bores in around 8 minutes with a simulated NN in GFA-BASIC on an Atari ST (it should be some milliseconds on a 2x2 transputer grid), while a commercial compiled EXE (programmed in C) on a PC needed nearly 30 minutes for a "brute force" calculation with only 90% to 92% accuracy.

The same with "self-learning/optimising" strategies for CNC control of different types of kinematics (cartesian to 6-DOF robots) - the "self-trained" NN-based controllers did a perfect job of "learning" to control the mechanics with only error feedback and without any specific preprogramming ... they were only programmed/optimised for this sort of "learning", and got it running by simply injecting some sort of NC code; they positioned the toolheads with the newly created and trained algorithms.

My thought was that some of the experts and decision-makers were aware of, or even panicked by, the potential of these "self-learning / self-optimising" technologies, and blocked the development to secure their jobs.

All the further developments in NNs and "self-training" were drastically reduced, or focused only on specific problems ...

Edited 1 time(s). Last edit at 01/18/2017 01:18PM by VDX.


Re: What do you reckon?
January 18, 2017 01:25PM
Interesting. We had mixed results with neural nets. It seemed to me that you could often get to 90% of a solution very quickly, but closing that last ~10% was next to impossible. As a result I tended to put NNs in the over-hyped category, but it sounds like you had more positive experiences. I was fond of simulated annealing for optimisation problems - if you could cast the problem domain into a suitable form, the approach was usually fairly robust. The downside was that it wasn't quick, and not terribly easy to parallelise.

Now, if you really want to get me started, we could get into a discussion of the progress in software engineering over the last 30 years. Or lack thereof. But I fear we may have already dragged this thread well off course.
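For readers who haven't met simulated annealing, the whole idea fits in a few lines: propose a random change, always accept improvements, and accept worsenings with a probability that shrinks as the "temperature" cools. A generic sketch for a TSP-style tour, using 2-opt segment reversals as the move (all names and parameter values are my own illustration, not from any software mentioned in the thread):

```python
import math
import random

def tour_length(tour, pts):
    # Total length of the closed tour through points pts
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, steps=20000, t0=1.0, cooling=0.9995, seed=1):
    """Simulated annealing with 2-opt moves: reverse a random segment,
    accept it if shorter, or with probability exp(-delta/T) if longer,
    and cool T geometrically."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)                      # random starting tour
    cur = tour_length(tour, pts)
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, pts)
        if cand_len < cur or rng.random() < math.exp((cur - cand_len) / t):
            tour, cur = cand, cand_len     # accept the move
        t *= cooling
    return tour, cur
```

The robustness comes from the acceptance rule; the slowness comes from re-evaluating the objective at every step, and the serial accept/reject chain is indeed what makes it awkward to parallelise.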
VDX
Re: What do you reckon?
January 18, 2017 01:35PM
... yes, this would disrupt the thread ... and it's not really suited for, or related to, RepRap in general anyway.


Re: What do you reckon?
January 19, 2017 02:09PM
Yay! More transputer people.
I worked for the company making the B0xx boards as a general tech underling and QA chap in 1986-ish, got to play with (sorry, "soak test") some nice big arrays, and had occasional cool times with Inmos R&D people.
I've still got a couple of ISA cards with T800s on them somewhere, I think. Best were the double-Eurocard B042 boards, just crammed with processors (42 of them), which got made from a batch of T414s produced with a non-functional external RAM interface.

Cheers,
Robin.
VDX
Re: What do you reckon?
January 19, 2017 05:24PM
Hi Robin,

Some working T800 boards sounds like real fun, but it could be problematic to find the spare time to reactivate the old gear.

I'm currently developing laser applications and new "additive" methods ... but I will look at whether I can use something related to "massive grid computing" with modern CPUs or GPUs for one of my R&D projects.


Re: What do you reckon?
January 19, 2017 06:37PM
If we were to go the parallel processing route, something like XMOS seems like a nice option for this. It has parallel cores with timestamped inputs and outputs for precise synchronisation. I've played with their (super cheap) startKIT, and it works well, with a nice programming model. Some people regard it as the spiritual successor to the transputer.

And on the subject of other operating systems on ARM, there's always RISC OS. I don't recall how good it is at real-time control; you'll find some of my code in there, but at the applications level.


See my blog at [moosteria.blogspot.com].
Re: What do you reckon?
January 20, 2017 02:54PM
Quote
VDX
... think about a "3D grid" of cheap but fast processors, simulating the complete 3D build volume of your printer, with "live" insertion of the STL object and calculation of the slices and moving paths in real time.

The modern small µCs are fast and cheap enough to build a "3D supercomputer grid" with real-time capabilities in simulation and graphical display, within the range of a DIY project for only some hundreds to thousands of Euros/USD ...

Viktor, everybody's comments, no matter how tangentially related to the OP, have been very, very welcome. By me, anyway.

Some people were thinking about physics engines and self-aware machines, and how you could build them, a long time before other people - and tinkering and building things too. I don't think there was a "dark conspiracy to suppress" so much as a lack of easy communication between the huge number of human brains needed to build such things in those days. The first digital brains cannot build themselves, and will not spontaneously evolve like human brains very often.

I looked up the history of fax the other day and was amazed that an analogue equivalent was working quite nicely... in 1888. Commodore themselves were at the cutting edge of CAD/CAM and other IT disciplines for a few short years. Their problem was that the software to test designs thoroughly before manufacture was very primitive. They could draw beautifully, but they had to start wiring PALs everywhere to make prototypes work; CAD didn't really help much that way at the time. They were all ready to show the A3000 with an 040 in 1990, but Motorola were not happy with that public relations exercise - they had a lot of other customers to please, and CBM did not buy that many of their chips.

The way they planned DSP was to wire it straight into a bus and be able to hit almost anywhere in the memory map with it. Their top two machines could do that, but most of their experimental Amiga designs came from a botched experimental production run, an early test of SMD manufacturing. Those machines could do FPU work 10 times quicker than a 68040 (which was not very easy to find in 1991). CBM thought they could just buy the DSP chips, but AT&T only sold them with software licences attached, including licences for telephony use.

Anyway, I don't think AmigaOS would be shunned if released around any kind of new architecture, but I do hope for an ARM version, and faster, cheaper controllers coming from that.

I don't think anybody is that fussy about where they come from, but faster and cheaper are considerations.

Edited 6 time(s). Last edit at 01/20/2017 03:15PM by DragonFire.
VDX
Re: What do you reckon?
January 20, 2017 05:46PM
... maybe no "dark conspiracy to suppress", but there do seem to be other influences hindering potential "killer apps" from going mainstream on their own.

One of my regular experiences is that "interdisciplinary teams" have big problems finding solutions for complex situations which can be solved without problems by an individual with a broader skill range.


Re: What do you reckon?
January 20, 2017 08:11PM
True. One encounters Parkinson's Law often when dealing with teams and hierarchies. Inertia is a powerful force of its own, sometimes insurmountable, because the way the micro-society is arranged does not allow for change of the current ways.