Firmware architecture question trajectory planning

Posted by nilrods 
Firmware architecture question trajectory planning
April 08, 2015 03:17PM
I thought I would ask this question as I could not find a good answer already in the forums.

Is there a reason why most (all that I have seen) RepRap firmwares don't offload some of the trajectory planning to an earlier step, similar to how slicing is done beforehand? My thinking is that most Arduinos are already limited in computing resources (compared to, say, a PC, BeagleBone, or Pi). With everything else the firmware is trying to do (step generation, heater monitoring, and so on), offloading any logic it can would seem beneficial.

My thought: since most slicing is already done on a PC or some other device (e.g. a Pi running OctoPrint), why couldn't some of the trajectory planning be preprocessed before sending to the Arduino? Something along the lines of preprocessing the gcode to send only position, velocity, acceleration, and deceleration values for each move on each axis. Since these machines run open loop, virtually all trajectory planning could be done ahead of time, I would think. It would really just be a more detailed version of the gcode already being sent.
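As a sketch of what such a pre-planned move might look like, here is one possible per-axis record. All field names, types, and units here are hypothetical, just to make the idea concrete:

```python
from dataclasses import dataclass

# Hypothetical pre-planned move record: instead of "G1 X10 F3000",
# the host would send one record per move with the trapezoid
# (accelerate / cruise / decelerate) already solved for each axis.
@dataclass
class PlannedMove:
    steps: int          # signed step count for this axis
    entry_rate: float   # step rate (steps/s) at the start of the move
    cruise_rate: float  # step rate during the constant-speed phase
    accel: float        # steps/s^2 during the accelerating phase
    decel: float        # steps/s^2 during the decelerating phase

# The firmware would only need to integrate these values into step
# pulses, with no junction-speed lookahead of its own.
move = PlannedMove(steps=800, entry_rate=0.0, cruise_rate=4000.0,
                   accel=3000.0, decel=3000.0)
```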

I would think it would have to be better to offload that functionality to another device. It is not as though most RepRaps run closed loop and have to respond in real time to motor feedback; I don't think an Arduino even could without a separate device handling the encoder data.

It might also allow easier integration with smarter stepper or servo controllers down the line.

Just wanted to see others' thoughts or reasoning on the subject. Maybe there are good reasons that I am just not seeing.

Thanks
Re: Firmware architecture question trajectory planning
April 08, 2015 06:09PM
Yes you could offload some of the planning to a preprocessor, but this has a number of disadvantages:

- It would be another step to run before you print, unless it was integrated into the slicer

- That preprocessor would need to know a lot of information about the printer, e.g. maximum speeds and accelerations for each drive

- The volume of data to be sent to the printer or copied to the SD card would be a lot higher, unless data compression were used

- Facilities such as pause/restart and adjusting print speed on the fly couldn't be done, because they impact on the planning.

I agree that Arduino Mega is not the ideal platform for running a 3D printer - although most people seem to find it adequate. There are a number of 32-bit MCUs that cost around half the price of an atmega2560 while providing much more performance and RAM. So I think the future is low-cost 32-bit controller boards to replace Arduino/RAMPS. I have one on the drawing board already. At the higher end of the market, there are already several 32-bit boards such as Duet, Smoothieboard and others, which provide other nice features such as Ethernet and software control of stepper motor currents.



Large delta printer [miscsolutions.wordpress.com], E3D tool changer, Robotdigg SCARA printer, Crane Quad and Ormerod

Disclosure: I design Duet electronics and work on RepRapFirmware, [duet3d.com].
Re: Firmware architecture question trajectory planning
April 08, 2015 06:56PM
Excellent points. That is why I brought it up in this forum: to get others' thoughts. It is always refreshing to see someone else's perspective.

Just to add to a couple of your points.

1. I was thinking it would be like a postprocessing step for a slicer.

2. agreed

3. Possibly, yes. The 32-bit position, velocity, acceleration, and deceleration values might be sent in a compact binary encoding (or a binary-to-text encoding of it) to save some size, though of course that adds decoding overhead and makes the stream less readable.

4. I don't see pause/restart being affected significantly; the printer would only need to finish the current move before pausing, and since most slicers I have seen break moves into such small segments, I don't see a major impact there. Adjusting print speed on the fly would definitely be impacted, though, as you say. I am not sure whether there would be an easy way to add a scaling function in the firmware; it might work for a Cartesian machine, but I have no idea of the impact on, say, a delta. Truthfully, I have never adjusted print speed on the fly outside of a test or two; to me that is more of a CNC mill or lathe feature. But you bring up a valid issue.
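On the size question in point 3, a compact binary encoding could be quite small per move. This is a purely illustrative layout (the field choice and widths are assumptions, not any existing protocol):

```python
import struct

# Illustrative 20-byte binary record for one axis of one move:
# int32 step count, then float32 entry rate, cruise rate,
# acceleration, and deceleration.
RECORD = struct.Struct('<i4f')  # little-endian: int32 + 4 * float32

payload = RECORD.pack(800, 0.0, 4000.0, 3000.0, 3000.0)
print(len(payload))  # 20 bytes per axis per move
```

Compared with a decimal-text representation of the same five values, this is fairly compact, but as noted it trades away human readability and adds decoding work on the MCU.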

I agree with you on the 32-bit MCUs; I do think they are better suited to 3D printing.

To be honest, I think something like a BeagleBone with its PRUs, or FPGA shields, is really where the future is headed: offloading step generation, encoder handling (if closed loop), and possibly other tasks to dedicated hardware, while leaving the things MCUs are good at on the microcontroller.

Thanks for the feedback.
Re: Firmware architecture question trajectory planning
April 09, 2015 03:46AM
The issue with pause/restart is that you need to stop the print cleanly, but after the planning is completed you can get long streams of moves that are merged into each other without the head coming to a standstill between them.



Re: Firmware architecture question trajectory planning
April 10, 2015 11:50AM
That makes sense. I can see how that would be impacted then.

Thanks
Re: Firmware architecture question trajectory planning
April 17, 2015 05:59PM
Hey,

This is not quite what you asked about, but I will mention a possible way to improve the performance of the planning calculation *in the firmware*. Note that I have implemented this in my APrinter firmware (but the planning there overall may not be faster due to a much different design...).

First let me give a short introduction to the planning stuff. The core of lookahead-based planning algorithms in most firmwares is a circular buffer of linear moves. Moves are added to the back of the buffer as gcode is received, and removed from the front as steppers execute them. The goal of the planning algorithm is to calculate the exact characteristics of the individual moves; this involves splitting each move into three parts (accelerating, constant-speed, decelerating). The problem is "difficult" because of the look-ahead, as you want to preserve a certain amount of speed at junctions between moves. Typically this is solved with a two-pass algorithm that traverses the moves first backward then forward.
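The backward/forward two-pass idea described above can be sketched roughly like this. This is a simplification with a single shared acceleration limit and per-move junction speed limits already computed; all names are illustrative, not any particular firmware's code:

```python
import math

def plan(moves, accel):
    """Two-pass lookahead over a list of moves.
    Each move is a dict with 'length' (mm) and 'max_entry' (mm/s),
    the junction speed limit for the corner entering that move.
    Returns the achievable entry speed for each move, assuming the
    queue must come to a stop after the last move."""
    n = len(moves)
    entry = [m['max_entry'] for m in moves]
    # Backward pass: each entry speed must allow decelerating to the
    # next move's entry speed within this move's length (v^2 = u^2 + 2as).
    next_speed = 0.0  # must be able to stop at the end of the queue
    for i in range(n - 1, -1, -1):
        limit = math.sqrt(next_speed**2 + 2 * accel * moves[i]['length'])
        entry[i] = min(entry[i], limit)
        next_speed = entry[i]
    # Forward pass: each entry speed must be reachable by accelerating
    # from the previous entry speed over the previous move's length.
    for i in range(1, n):
        limit = math.sqrt(entry[i - 1]**2 + 2 * accel * moves[i - 1]['length'])
        entry[i] = min(entry[i], limit)
    return entry

# Start from rest, then two short moves with generous junction limits:
moves = [{'length': 2.0, 'max_entry': 0.0},
         {'length': 2.0, 'max_entry': 100.0},
         {'length': 2.0, 'max_entry': 100.0}]
speeds = plan(moves, accel=1000.0)
```

With the entry speeds fixed, splitting each move into its accelerating, constant-speed, and decelerating parts becomes a purely local calculation.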

Importantly, in typical implementations, this algorithm is run each time a new move is added to the buffer. In the worst case, you need to recalculate all N moves, where N is the size of the buffer. So the more lookahead you want, the slower it runs.

What I have discovered is that performance can be improved if you don't run the planning algorithm each time a move is added, but instead accumulate a few moves and then run the planning. So instead of doing an O(n) calculation on every new move, you do it every m moves, resulting in an amortized cost of O(n/m) per new move. If m is a fixed fraction of n, that gives O(1) amortized cost per new move (e.g. m=n/2 --> O(n/m) = O(n/(n/2)) = O(1)).
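A rough back-of-the-envelope comparison of the two strategies (operation counts only, not a benchmark; the numbers are arbitrary):

```python
# Assume a lookahead buffer of n moves and a full O(n) replan each time.
n = 16             # lookahead buffer size
total_moves = 1000 # moves processed over the whole print

# Replanning on every new move: ~n units of work per move.
per_move = n * total_moves

# Replanning only every m = n/2 moves: ~n units of work per batch.
m = n // 2
batched = n * (total_moves // m)

print(per_move, batched)  # batching cuts the replanning work by a factor of m
```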

Once you're doing this, though, you need to consider what kind of lookahead it allows. The common implementation without this optimization gives you n-1 moves of lookahead, but now that you're accumulating m moves you generally get n-m, so you need more memory to achieve the same lookahead, although it runs faster. Additionally, the lookahead is actually non-uniform: it is really between n-1 and n-m, depending on the particular move. So in cases where the speed is being limited by the buffer size, there will be jerky motion.
Re: Firmware architecture question trajectory planning
April 22, 2015 12:13PM
Sorry, I just saw your response.

I can see what you're talking about. Your approach does look more efficient, with minimal negative impact. It seems smarter than recalculating on every move, and it also looks like it would be easier to do on the fly that way.

I was just thinking that most of the calculation you mention (the accel, constant-velocity, and decel segments) can be determined well in advance of being fed to the firmware, since the future moves are known. My thinking was to pre-calculate all of that lookahead planning for each move/segment and rely on the firmware only to integrate those values into the appropriate steps. Of course, as previously noted, that does have some impact on other areas such as pause/restart.

Thanks for the feedback.

Chris
Re: Firmware architecture question trajectory planning
April 22, 2015 12:55PM
I'm sure it's true that you can reduce the planning time by bunching moves so as to add several at once. However, when debugging my fork of RepRapFirmware, I found that when adding a new move, it generally only has to review one move already in the queue, and very rarely 2 moves. I never saw it having to review 3 or more moves. It's only when the previous move is a pure deceleration move (and remains so after you adjust it) that you need to look at the one before that; and so on. Long sequences of deceleration-only moves don't occur in normal printing.

Edited 1 time(s). Last edit at 04/22/2015 12:55PM by dc42.



Re: Firmware architecture question trajectory planning
April 22, 2015 01:28PM
@dc42
I have also wondered how many moves need to be processed each time. My APrinter firmware does not optimize away unnecessary computations; it does a full planning round whenever it decides to recalculate. When I tried to optimize this (by going back only as far as something changed), I found that it took more cycles on average, so I didn't go with it. Though this was probably after I implemented the bunching, so it doesn't exactly contradict what you're saying.

On the other hand, I'm pretty sure it's easy to hit the worst case with the right input. Generally, this is any input where you are constantly "out of lookahead", i.e. the speed is artificially limited because the buffer is too small for the desired speed and the maximum acceleration. I believe you would get into a situation where all planned moves are decelerating, and you are repeatedly turning the first move into a constant-speed one while increasing the speeds of all the other moves. This situation is easier to hit with a non-Cartesian coordinate system where the firmware handles the kinematics by splitting each move into small pieces (I know dc42's firmware does not do it this way).

Edited 1 time(s). Last edit at 04/22/2015 01:29PM by ambrop7.