Using AI in CNC

I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
—Joanna Maciejewska

Moreover, this gets into matters of pricing and so forth: at least one music composer has noted that much of the small-piece work they used to do has dried up, because people instead use a riff generated by an AI rather than buying from a stock vendor or purchasing the rights to a short composition at industry rates.

Ages ago, when the subject of pricing came up, I noted:

Remember that you’re not just pricing things, you’re also setting a perceived value which will affect the ability of others to charge for this sort of work as part of a business/livelihood.
Skull Mount…how much? - Carbide 3D Community Site

If we choose to accept AI-generated work, and thereby allow it a value, then how much does that cheapen and devalue human creative efforts?

3 Likes

Isn’t the first question whether we can tell AI-generated work from human-generated? That’s getting tougher and tougher.

From your link, on the Thomson v Ross decision:

Importantly, the Ross AI model does not use generative AI to generate new content; instead, its AI technology is designed to improve search results for users conducting legal research into existing court decisions.

So that seems a different use case than what we’re talking about.

The question is whether it is fair use to train an AI on copyrighted materials; this decision held that it is not.

1 Like

AI-generated images exist in a legal gray zone. They generally cannot be copyrighted, their training-data use is legally contested, and ownership is unclear. Until laws evolve, creators, businesses, and users should be cautious about relying on AI art for commercial or proprietary purposes.

Would you like me to break down real-world lawsuits and cases (like artists suing AI companies) to show how these issues are playing out in practice? That could give you a concrete sense of the stakes.

AI image generators are at the center of high-stakes copyright battles. Getty Images, individual artists, and media companies are suing AI firms like Stability AI, Midjourney, and OpenAI. Courts are still deciding whether training on copyrighted works is legal, and these rulings will define the future of AI art and creative rights.

That is an incorrect assessment as a whole - the issue is nuanced. From the article:

The Court also indicated that a transformative use might have led to a different outcome.

Judge Bibas made clear to emphasize that “only non-generative AI is before me today,” signaling that courts may take a different approach in other pending cases concerning the unauthorized use of copyrighted material to train generative AI tools – which may arguably be a more transformative use.

and

The third factor asks how much of the copyrighted work was used and how substantial the copied portion was relative to the whole work. The Court held that this inquiry must focus on the amount and substantiality of the copied work that the infringing work makes accessible to the public. Since neither the headnotes nor Key Number System were accessible to the public in Ross’s final output, this factor favored Ross.

This analysis is likely to be relevant in the pending generative AI cases, particularly for plaintiffs who can demonstrate that significant portions of their copyrighted work used as training data were in fact reproduced in the new output generated by AI platforms.

For generative AI work, it’s going to be hard to identify the “significant portions” of the work “reproduced in the new output.”

There is copying and there is copying. Copying a few sentences in a research paper comparing the writing styles of different authors is different from copying a few sentences without attribution in a new paper on the same subject. There are cases pending now where music publishers are going after YouTube reviewers talking about how great songs are, where those reviewers have just a few seconds of that song in the video. I suspect that, if properly defended, such use will be considered Fair Use under copyright law.

So a while ago I decided to build a “Greene and Greene inspired” coffee table. I started assimilating Training Data for this task a long time ago. I read books on joinery by the likes of Tage Frid and Sam Maloof, inhaled every edition of Fine Woodworking, watched Norm build things on TV every week, began reading a lot about design and studied how details came together to create a style. As the time grew closer I started visiting G&G houses, taking in every detail and how they were used in combination, studied every photograph I could find of G&G work, read Darryl Peart’s books very closely and bought every plan book I could get my hands on.

I wanted a piece different from anything G&G ever built, but recognizably in their style. I synthesized things I had learned about structure, the design vocabulary of G&G, and wider principles of good design. I came up with a plan that I liked quite well and built the table.

I’d argue that although the table per se never existed before I made it, there was absolutely nothing creative about my process. I simply reshuffled existing elements and applied well-understood principles exactly the way a generative AI program might. It did take me 40 years rather than two minutes. I don’t think my process differs significantly from what 99% of successful artists and craftspeople do when they want to make something “new.” After some 10,000 years of people and nature creating beautiful objects, the number of new ideas is pretty darn small, and it is to be seriously celebrated when it happens. Most of the copyrighted material that generative AI is sucking up is simply recycled and repackaged ideas that have come before; the barrier to obtaining copyright is ridiculously low. My table design is certainly copyrightable despite being only a re-use of ideas that have come before. G&G didn’t invent the cloud lift, raised pegs, or any of their signature elements, but made beautiful furniture anyway.

8 Likes

Super nice table, btw!

Tom, I would be happy to read a brief break-down, link would be ok.

Do you want a world where large corporations are allowed to consume all imagery and words without compensation to their creators and to then allow them to be output on their terms?

7 Likes

Much of this banter will seem silly when the magnitude of the effects on our social structures plays out in the not-too-distant future.

The financial effects, property changes, and the power consumption of the hardware needed to turn a 40-year research project into a two-minute trial image will be horrendous.

It isn’t a gradual or smooth transition like when the Internet came into the picture. It is a rushed, hurried, panicky process that is playing out in areas that don’t want to change or can’t.

There will be big losers.

6 Likes

In actuality, until legislation is written specifically for AI, protection is offered for exclusivity per piece of work. This is done via attorney fees, searches to determine the uniqueness of the work, and filing fees and other legal costs. The artisan then has protection from encroachment. That being said, if encroachment does happen, the copyright holder must bring suit to initiate financial recovery. The plaintiff must show evidence of damage, and the recovery needs to cover litigation fees. Too often, it amounts to a cease-and-desist notification to the infringer. That’s the way it works today; much of what we’re discussing here is likely not worth the time or cost to pursue. Secondly, who will be the enforcer? I just got a new option in Grok that takes a still photograph and makes it talk, laugh, and animate.

Of course I don’t, but history has shown me that trying to legislate to stop technical progress always fails. Back in the day, people were worried about automobile accidents, so they passed “Red Flag Laws” requiring that someone literally walk in front of a moving car with a red flag or light. In England, non-union people couldn’t own full-sized woodworking tools; hence the “Gent’s plane” and other, smaller items. That didn’t last, either. Buy a new microwave today and I suspect you won’t be able to open the door while cooking. Heck, even the old SNL “Triopenin commercial,” where arthritis medicine came in child-proof bottles that the actual patients couldn’t open, was a joke based on real-life laws that had unintended consequences.

In our world today, where corporations are considered people in terms of rights, it’s going to take some really smart people to come up with laws that make sense and don’t break things even worse. How are you going to regulate what freely available information on the web can or can’t be consumed by anyone or anything? AI can solve CAPTCHAs and the like today.

A web site, especially one that charges money, might put up a “what you can do with our information” requirement that has to be agreed upon first, but how will that be enforced? What’s the difference between me watching videos like this versus a computer looking through images freely posted online and then generating its own art in the same style? And if the person running the AI on his home computer (which will happen) lies about it, how do you prove otherwise?

Back when I was a student, you couldn’t bring a calculator to the SAT exam. Today, you can (although, paradoxically, you still can’t bring a slide rule!). Colleges almost always let engineering tests be open book. And that all makes sense, since in the real world you will have access to a calculator or reference books. We draw the line today at internet access during tests, but that’s probably going to go away sometime, as engineers on the job have access to the internet, too. And today working engineers have access to AI, but they still need enough engineering knowledge to know whether the AI is right and/or its answer even makes sense. Problem solving, judging the proper application of technologies, and order-of-magnitude analysis (“Big O”) are the skills humans still need to learn.

But, we’re way afield from the topic of fun/proper/profitable use of AI to design work that is produced/assisted by CNC. Let the cowboys warm their feet by the fire, I say.

While the power of AI is impressive, I don’t believe the law has caught up here, and I wanted to offer a different perspective, which may have already been stated. It’s a great thread so far and I may just not have caught up on other comments yet, but here’s my two cents:

Using AI to clean up code, plans, vectors, etc. is getting better by the day, but the artwork’s legality may introduce legal challenges down the road. As artists, authors, and other creators find out their work has been used to train AIs, any business owners using this material may find themselves in hot water. At the time of my post, OpenAI has 18 open lawsuits against it for copyright infringement. $150,000 per infringement is nothing to sneeze at.

Newspapers Seek $10B in Latest OpenAI Copyright Suit, Its 18th | Law.com for context.

Since the subject of this thread is simply and vaguely “Using AI in CNC,” I just wanted to point that out for some of my fellow community members who may not be entirely in the know and are making money through AI.

Other than that, loving the discussion so far. I firmly believe AI has the power to drive humanity to a better place one day. In the meantime, don’t get run over by a slow legal system!

Remember kids - you wouldn’t download a car!!

1 Like

Question: What’s the difference between artificial intelligence vs. large language model?

Short definition

  • Artificial intelligence (AI): the broad field of designing machines or software that perform tasks which normally require human intelligence (learning, reasoning, perception, planning).
  • Large language model (LLM): a specific type of AI trained on very large text datasets to predict and generate human-like language.
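For anyone curious what “predict and generate human-like language” means mechanically, here is a toy sketch: a bigram counter that picks the most likely next word. Real LLMs use transformer networks trained on billions of tokens; the tiny corpus and words below are made up purely for illustration.

```python
# Toy illustration of the "predict the next word" objective behind LLMs.
# Real models are vastly more capable; this only shows the core idea.
from collections import Counter, defaultdict

corpus = "the end mill cuts the slot and the end mill cuts the pocket".split()

# Count which word follows which word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("end"))  # prints "mill"
```

Chaining such predictions (feed each prediction back in as the next context) is, at miniature scale, how generation works.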

Key differences

| Aspect | Artificial Intelligence (AI) | Large Language Model (LLM) |
|---|---|---|
| Scope | Extremely broad: includes vision, robotics, planning, reinforcement learning, symbolic systems, etc. | Narrower: focused on language understanding and generation. |
| Goal | Achieve intelligent behavior across tasks (may include perception, control, decision-making). | Model text patterns to generate coherent, contextually appropriate language. |
| Methods | Many approaches: rule-based systems, classical ML, deep learning, evolutionary algorithms, symbolic AI. | Primarily deep learning (transformer architectures) and self-supervised training on huge corpora. |
| Inputs/outputs | Any modality: images, audio, sensor data, actions, structured data, text. | Mostly text input/output; can be extended to multimodal with paired data (e.g., image+text). |
| Training data | Varies by task (images, sensor logs, simulation data, labeled datasets). | Massive unlabeled or weakly labeled text (web pages, books, code, transcripts). |
| Capabilities | Problem-solving, perception, control, planning, multi-step decision making. | Strong at language tasks: summarization, translation, Q&A, code generation, dialogue; limited explicit reasoning/action without augmentation. |
| Explainability & control | Depends on method; can be designed with symbolic/interpretable components. | Generally opaque/black-box; output is probabilistic and can be unpredictable. |
| Typical use cases | Autonomous vehicles, robotics, recommendation engines, medical diagnosis systems. | Chatbots, writing assistants, search augmentation, code helpers, content generation. |

How they relate

  • LLMs are a subset of AI: one class of models within the broader AI toolkit.
  • LLMs can be components in larger AI systems (e.g., a robot using an LLM for language understanding while separate modules handle perception and motor control).
  • Combining LLMs with other techniques (retrieval, symbolic reasoning, planning) can give more capable AI systems.
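As a concrete (and very simplified) sketch of the retrieval idea mentioned above: fetch the most relevant documents first, then ground the prompt in them. The documents here are hypothetical, the keyword scorer stands in for a real vector index, and the `llm_complete` call named in the comment is a placeholder for whatever model API you use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever is a toy keyword scorer, not a real embedding index.

def retrieve(query, documents, k=2):
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model's answer in the retrieved text."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The X-axis belt tension should be checked monthly.",
    "Spindle runout above 0.01 mm causes chatter.",
    "Climb milling usually leaves a better finish in wood.",
]
prompt = build_prompt("How often should belt tension be checked?", docs)
# A real pipeline would now call something like llm_complete(prompt).
```

The point is that the model answers from supplied facts rather than from whatever its training data happened to contain, which is why RAG appears so often in the table below as the complement to a bare LLM.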

Practical implications (brief)

  • If you need generalized intelligent behavior across modalities or physical action, look beyond LLMs to broader AI methods.
  • If your problem is language-centric (writing, summarizing, conversational interfaces), LLMs are often the most effective starting point.

Task-to-Approach table

| Task | LLM alone sufficient? | Typical AI alternative / complement |
|---|---|---|
| Customer support chat (answering FAQs, simple troubleshooting) | Yes: often sufficient with prompts and fine-tuning | Retrieval-augmented generation (RAG) for facts; routing logic |
| Technical troubleshooting requiring system state (logs, metrics) | No: LLM lacks live access and reliable reasoning about state | LLM + connectors to monitoring systems, rule-based diagnoser |
| Autonomous driving (perception + control) | No | Computer vision + sensor fusion + control systems + RL/planning |
| Medical image diagnosis (X-ray, MRI) | No | Convolutional nets / vision transformers trained on labeled images; decision support with clinical pipelines |
| Summarization of documents or meeting notes | Yes: often sufficient; may need prompt engineering | RAG for long corpora; extractive summarizers for fidelity |
| Personalized recommendation (products, content) | No: LLM can generate explanations but not optimal recommendations alone | Collaborative filtering, ranking models, causal/CTR models |
| Code generation from spec | Yes: LLMs perform well generating and explaining code | Static analysis, unit tests, synthesis tools to verify correctness |
| Real-time robotics control | No | Control theory, sensor fusion, low-latency planners; LLMs only for high-level instruction translation |
| Legal contract review (identify clauses, flag risks) | Partially: LLMs can extract/flag but may miss nuance or hallucinate | LLM + domain-specific rules, retrieval of precedent, human-in-the-loop review |
| Multimodal image-to-text tasks (captioning, VQA) | Sometimes: multimodal LLM variants can handle this | Specialized vision-language models or combined pipelines |
| Fraud detection (transactional patterns) | No | Time-series models, anomaly detection, graph ML; LLMs can help explain alerts |
| Creative writing (stories, marketing copy) | Yes: LLMs excel here | Human editing, style models, brand guideline filters |
| Scientific literature search and synthesis | Partially: LLMs can summarize but may hallucinate citations | Retrieval systems, specialized search indexes, domain models |
| Interactive tutoring with stepwise math reasoning | Partially: LLMs can teach but may err on multi-step rigor | Symbolic math engines, step-by-step verifiers combined with LLM explanations |
| Data extraction from forms (structured fields) | Yes: for many cases using prompt + format constraints | OCR + extraction pipelines + validation rules |
| Regulatory compliance monitoring (changing rules) | No | Rule-based systems, dedicated compliance pipelines; LLMs for drafting alerts/explanations |

Notes

  • “Partially” means LLMs help but should be combined with retrieval, verification, domain rules, or human oversight.
  • When LLMs are “sufficient,” add safeguards: retrieval augmentation, validation tests, or human review depending on risk.
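To make the “add safeguards” point concrete, here is a minimal sketch of validating a model’s structured output before trusting it. The field names and sample outputs are hypothetical; the pattern is simply "parse, check, and fall back to retry or human review on failure."

```python
# Sketch of a post-generation safeguard: verify that a model's
# structured output is well-formed JSON with the expected fields
# before any downstream code acts on it.
import json

def validate_extraction(raw_output, required_fields):
    """Return the parsed dict if the output is valid JSON containing
    all required fields; otherwise return None so the caller can
    retry or escalate to human review."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return None
    if not all(field in data for field in required_fields):
        return None
    return data

# A well-formed response passes; free-text guesswork does not.
good = validate_extraction('{"part": "wasteboard", "qty": 2}', ["part", "qty"])
bad = validate_extraction('qty is probably 2', ["part", "qty"])
```

Even this trivial gate catches a large class of failures cheaply; higher-risk uses warrant schema validation and a human in the loop.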

PS. Of course I used “AI” to create this information in two minutes! :smiley:

3 Likes

AI is great for getting a job done in a hurry, and I do use it sometimes. But I bother to design and make stuff so that I can see my ideas become something concrete, learn as I go, and express myself. It’s OK if my drawings are a little crude. There is a reason that hand-made stuff is valued more than identical items rolled off an assembly line. David Pye had a lot to say about this.

2 Likes

This topic was automatically closed after 30 days. New replies are no longer allowed.