Coder Request: 10W Scanner

Recent discussion has given me ideas, but sadly, while I can do a lot with g-code, I’m no good at any web language. There’s been some discussion about using the built-in thickness measurement on the 10W laser to make automatic adjustments to the Z height. While that’s currently not an option, Z movement to follow objects is (if you use my non-planar guide). So here’s my idea, and hopefully someone smarter than me can implement it. :slight_smile:

A simple webpage with a few options for input (see the sketch after this list):

Snapmaker IP (the IP of your Snapmaker; maybe there could be an auto-scan like Luban does to auto-fill it)
Origin Location (the X/Y location of your intended origin)
Object Size (the size in mm of your planned project; maybe two boxes, one for X and one for Y, or a combo in X,Y format)
Interval Spacing (how fine you want the mesh; smaller spacing takes longer; maybe default 10mm)
Laser Offset (during my testing, the measurement laser hits the object 73mm in X- compared to the blue diode, so that could be the default, but you can test your own offset; I’ve not done Y offset tests yet)
Jog Speed (default 3000 or 6000 just to help speed things up; Snapmaker uses 1500 in Luban)
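
Something like the following, as a rough sketch of those inputs in TypeScript (all names and defaults are just my illustration; the 73mm offset, the 10mm default, and the feed rates come from the list above):

```typescript
// Rough sketch of the scan options; names and defaults are illustrative.
interface ScanOptions {
  snapmakerIp: string; // e.g. "192.168.1.100"; maybe auto-scanned like Luban
  originX: number;     // X of the intended origin, in mm
  originY: number;     // Y of the intended origin, in mm
  sizeX: number;       // object size in X, in mm
  sizeY: number;       // object size in Y, in mm
  interval: number;    // mesh spacing in mm; smaller is finer but slower
  offsetX: number;     // red measuring laser vs. blue diode offset, in X
  offsetY: number;     // same for Y (untested so far)
  feedRate: number;    // jog speed; Luban uses 1500
}

const defaults: Partial<ScanOptions> = {
  interval: 10,   // default 10mm per the list above
  offsetX: 73,    // measured X offset from my testing
  offsetY: 0,     // no Y tests yet
  feedRate: 3000, // faster than Luban's 1500 to speed things up
};
```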

The internal process: after a user inputs these options, some simple math is done in the background to take your origin location (say 0,0) and add the laser offset so the red laser sits on the actual origin (making it 73,0). These numbers are plugged into the web API to initiate the measurement cycle.

http://{Snapmaker IP}:8080/api/request_Laser_Material_Thickness?x={X Origin}&y={Y Origin}&feedRate={Jog Speed}

Then, after it gets a response similar to this: {"status":true,"thickness":12.56268310546875}, truncate the thickness to 2 decimal places (i.e. 12.56) and save this height at the X/Y location. Afterwards, increment the X position by the interval spacing and repeat until it reaches the object size. Then do the same for Y so it moves up to the next row, and step X back toward the starting origin, continuing until it reaches the object size in Y.
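
Put together, the measurement cycle might look roughly like this. Only the endpoint and the response shape are what the machine actually uses; everything else (names, the serpentine order) is an illustrative sketch, and it assumes a fetch-capable runtime (a browser or Node 18+):

```typescript
// Sketch of the scan loop. The endpoint and response shape are real; the rest
// is illustrative. Assumes a fetch-capable runtime (browser / Node 18+).
type Point = { x: number; y: number; z: number };

async function scanGrid(
  ip: string,
  originX: number, originY: number, // intended origin
  sizeX: number, sizeY: number,     // object size in mm
  interval: number,                 // mesh spacing in mm, e.g. 10
  offsetX: number,                  // red-laser X offset, e.g. 73
  feedRate: number,                 // jog speed, e.g. 3000
): Promise<Point[]> {
  const points: Point[] = [];
  let reverse = false;
  for (let y = originY; y <= originY + sizeY; y += interval) {
    // Serpentine: walk X forward on one row, backward on the next.
    const xs: number[] = [];
    for (let x = originX; x <= originX + sizeX; x += interval) xs.push(x);
    if (reverse) xs.reverse();
    for (const x of xs) {
      // Shift by the laser offset so the red dot lands on the intended X/Y.
      const url = `http://${ip}:8080/api/request_Laser_Material_Thickness` +
        `?x=${x + offsetX}&y=${y}&feedRate=${feedRate}`;
      const res = await fetch(url);
      const data = (await res.json()) as { status: boolean; thickness: number };
      if (!data.status) continue; // skip failed measurements
      // Truncate to 2 decimal places: 12.56268310546875 -> 12.56.
      const z = Math.trunc(data.thickness * 100) / 100;
      points.push({ x, y, z });
    }
    reverse = !reverse;
  }
  return points;
}
```

Each call blocks until the head has physically moved and measured, so a fine mesh over a large object will take a while.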

Then take the saved positions and build an STL, kinda like a flat lithophane, to make a visualization of the scanned object. The values between each point could be averaged to smooth it out; it would pretty much mimic the way the bed level visualizer plugin for OctoPrint works, just outputting to an STL.
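
Here’s a rough sketch of the STL step: it turns the saved grid into a triangulated ASCII STL, with no smoothing (assumes the measurements form a complete rectangular grid, grid[row][col], spaced by the scan interval):

```typescript
// Sketch: write the scanned heightmap out as an ASCII STL surface.
// Assumes grid[row][col] holds heights (mm) on a complete rectangular grid
// spaced `interval` mm apart. Normals are written as zero vectors; most
// CAD/slicer tools recompute them on import anyway.
function gridToStl(grid: number[][], interval: number): string {
  const vertex = (x: number, y: number, z: number) =>
    `      vertex ${x} ${y} ${z}\n`;
  let out = "solid scan\n";
  for (let r = 0; r + 1 < grid.length; r++) {
    for (let c = 0; c + 1 < grid[r].length; c++) {
      const x0 = c * interval, x1 = (c + 1) * interval;
      const y0 = r * interval, y1 = (r + 1) * interval;
      // Two triangles per grid cell.
      const tris = [
        [[x0, y0, grid[r][c]], [x1, y0, grid[r][c + 1]], [x0, y1, grid[r + 1][c]]],
        [[x1, y0, grid[r][c + 1]], [x1, y1, grid[r + 1][c + 1]], [x0, y1, grid[r + 1][c]]],
      ];
      for (const t of tris) {
        out += "  facet normal 0 0 0\n    outer loop\n";
        for (const [x, y, z] of t) out += vertex(x, y, z);
        out += "    endloop\n  endfacet\n";
      }
    }
  }
  return out + "endsolid scan\n";
}
```

This writes just the top surface, not a watertight solid; for a closed body you’d add side walls and a bottom face, but the bare surface is enough to model against.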

The resulting STL could then be used to build the project on in Fusion360 or wherever, so the movements can be non-planar and follow the object.

Or this could just be rambling and not really viable; it’s early morning and I’m still tired. :upside_down_face:

EDIT: Based on how the toolhead moves from measuring to setting the origin, there’s an X70 Y-8 difference.

Red measuring laser: (image)

Blue diode laser: (image)

Unfortunately, AFAICT, the 10W laser focus system is entirely closed source. I see no references to it in the GitHub firmware. :frowning:

(Unless, of course, you have some G-code command that can get the focus, ’cause if so, I’m basically building this idea right now (but using a dial indicator and OctoPrint to automate @Tone’s “level gcode hot” idea).)

My method laid out above should work: using that URL, I’m able to dictate where to move the toolhead, run the measurement, and get the thickness back. You can try it yourself; just replace the Snapmaker IP with yours, set the X/Y where you want it to measure, and give it a feed rate. Visit that URL in any browser and it’ll move to the X/Y location you set, take the measurement, and return the thickness as I pasted above.

I’ve been using it the past couple of days to almost completely automate my laser projects; the only thing I have to do is put the object on the platform.

My workflow is now this:

After securing my object to the table and closing the enclosure:
1: Generate my project in LightBurn and export
2: Run the measuring API
3: Add G53 above the initial move and add Z(laser height + thickness from API) to the initial move line (see the sketch at the end of this post)
4: Save and load into Luban
5: Start in Luban, and when it asks for thickness, just put an arbitrarily high number (like 290), since I put the focal height in the g-code itself (you can also just put in the measurement from step 2)
6: Close Luban if I don’t need to monitor/change things.

It saves a lot of walking back and forth, and I don’t have to manually jog it down to touch off if the object isn’t in the center, etc. I’ll of course do more testing to refine things, but this is my current workflow.
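
If anyone wants to script step 3, a rough sketch of the edit might look like this (FOCAL_HEIGHT is a placeholder, not a verified value; substitute your 10W head’s actual focal length; it also assumes the initial move doesn’t already have a Z word):

```typescript
// Sketch of step 3: put G53 above the initial move and add the focus Z to it.
// FOCAL_HEIGHT is a placeholder, not the real focal length of the 10W head.
const FOCAL_HEIGHT = 20; // mm, placeholder

function setFocusHeight(gcode: string, thickness: number): string {
  const lines = gcode.split("\n");
  // Find the initial move (first G0/G1 line).
  const i = lines.findIndex((l) => /^G0?[01]\b/.test(l.trim()));
  if (i === -1) return gcode; // no move found; leave the file alone
  const z = (FOCAL_HEIGHT + thickness).toFixed(2);
  lines[i] = `${lines[i].trim()} Z${z}`; // Z(laser height + thickness from API)
  lines.splice(i, 0, "G53");             // G53 above the initial move
  return lines.join("\n");
}
```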


Oh! I misread your first post; I thought you were describing the API you wanted… this is super easy then, like just a few lines of JavaScript should do it, lol (well, generating an STL might be tricky, but iterating over points and measuring is simple if there’s an API).

Maybe the source for image to lithophane could help.

Nice, looks doable; maybe at some point I could get to this.

Have you tried it on a curved object and does it actually work?

I assumed the thickness detection only works reliably because it assumes the object is flat.
My guess is that it looks at the location of the dot, moves the head down, looks at the new location of the dot, and given those parameters it can estimate the thickness. But this assumes that the two measurement points are at the same height.

Note: I haven’t tried this and haven’t used this feature (I always manually set the right height), so I can definitely be wrong.
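
If that guess is right, the math would be something like this (purely illustrative; this sketches the guessed mechanism above, not whatever the firmware really does):

```typescript
// Illustrative only: a similar-triangles version of the guessed mechanism.
// If the red laser is mounted at a fixed angle, its dot shifts sideways by a
// constant amount per mm of height change. A calibration pass that drops the
// head a known distance gives that ratio; a single shot then gives the height.
function estimateThickness(
  dotShiftPx: number,   // dot displacement vs. the zero-height reference
  calibShiftPx: number, // dot displacement seen during the calibration drop
  calibDropMm: number,  // known head drop used during calibration
): number {
  const pxPerMm = calibShiftPx / calibDropMm; // similar triangles
  return dotShiftPx / pxPerMm;                // estimated thickness in mm
}
```

Under this model a regular measurement would be a single shot, with the up/down move needed only during calibration.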

I’ve not tested a curved object yet; it was on my list of tests to do yesterday, but too many life things got in the way. Also, the two-stage routine where it moves up and down only happens during calibration. During regular use, it just moves to the center, turns on the red laser for a second, then spits out the measurement.

Aha, OK. Thanks for the clarification.

I should have a deeper look at it then and see how they implemented this. The summer holiday period is coming up for me, and I was hoping to spend some time on my 3D touch probe experiments again (a 3D touch probe connected to the CAN bus). This could be an alternative. (I’ve gotten the individual parts to work more or less, but it still needs a lot more work to be really usable.)

The actual interface needed could be exactly the same, just a different trigger.

I prefer the idea of the touch probe, as it also allows finding edges (and thus the center of an object or hole).