Analyst Mocks the Idea That It's 'The End of Programming' Again – Slashdot

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
The GPT thing is very good at making text that looks like what you asked it to.
Which makes it great at making code that seems correct but isn’t.

And that is exactly the point in a nutshell. This thing can emulate style and make things look or sound great. It cannot do the details, though, and in engineering the details are critical.
Mycroft seems real enough to me:
https://en.wikipedia.org/wiki/… [wikipedia.org]
/s
Indeed. And that stupidity is _old_. For example, Marvin “the Idiot” Minsky claimed that as soon as a computer has more transistors than a human brain has brain cells, it will be more intelligent. Completely clueless drivel, of course (neuroscience still struggles to completely model even a single brain cell at human-level complexity, and it uses a lot more than one transistor in the attempt), but many people believed it because they cannot fact-check and it came from some “authority”.
“The Moon Is A Harsh Mistress”
You bet I could! I’m not a bad programmer myself!
We don’t have to sit here and listen to this…
The guy who wrote the original article is selling AI. If people believe him he stands to make more money, therefore his opinion doesn’t count.
One problem with questions like these is:
1) Even though we probably aren’t much closer to AI doing significant programming tasks,
2) Almost no one will be able to accurately identify when we are close to AI doing significant programming tasks
I don’t see anything in recent OpenAI or other similar technologies which leads me to believe programmers are at risk of being disrupted by AI, but I don’t really think I’ll know it when I see it.
It’s important to point out: we are, for the most part, not asking the AI how to do something; we are asking it to do something for us. In the end most of us won’t care if it does a good job or not. We just want things to turn on when we press a button.

Look at the mess Southwest has. They can’t track their pilots and flight attendants because they have a bunch of undocumented scheduling code running on different machines that is so convoluted that they can’t integrate a new tracking system into it. AI won’t help with that.

I’ve heard they have to reboot their scheduling system every night to keep it functioning. They’ve been accumulating IT debt for decades at this point.

Not yet. But the AI image and text stuff do give me hope that, in fact, the “holoprogram from a few sentences” may one day be possible.

The problem with the current batch of AI generators is that they’re no more than a statistical compilation of everything they’ve seen, grouped by subject. It follows that, from that summary, you can only retrieve content that they have seen elsewhere.
Surely it can mix and match that content in new ways if you know how to ask carefully for the correct subjects; but it can’t be used to solve problems that it has not seen before. It does not have a component of ‘creativity’ in the sense of building new code to so
I wrote an article about my experiments with ChatGPT where I asked it to build a SwiftUI form with name, address, city, state and zip fields. It did. I asked it to move the fields to a view model. It did. I asked it to add some validation and it did. I asked it to create a struct from the data, populated from the VM. It did. I asked it to add a function to populate the VM from the struct. It did.
And it did all of that in less than two minutes.
I wouldn’t ask it to build an entire app from scratch. But it’s v
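For readers who want a feel for what that exchange produces, here is a minimal sketch of the kind of SwiftUI form, view model, validation, and struct round-trip described above. The type and property names are illustrative; the article does not reproduce ChatGPT’s actual output.

    import SwiftUI

    // Plain data struct, populated from the view model.
    struct Contact {
        var name: String
        var address: String
        var city: String
        var state: String
        var zip: String
    }

    // View model holding the form fields plus some simple validation.
    final class ContactViewModel: ObservableObject {
        @Published var name = ""
        @Published var address = ""
        @Published var city = ""
        @Published var state = ""
        @Published var zip = ""

        // Basic validation: nothing empty, zip is exactly five digits.
        var isValid: Bool {
            ![name, address, city, state].contains(where: { $0.isEmpty })
                && zip.count == 5
                && zip.allSatisfy(\.isNumber)
        }

        // Build a struct from the current field values.
        func makeContact() -> Contact {
            Contact(name: name, address: address, city: city, state: state, zip: zip)
        }

        // Populate the fields from an existing struct.
        func load(_ contact: Contact) {
            name = contact.name
            address = contact.address
            city = contact.city
            state = contact.state
            zip = contact.zip
        }
    }

    struct ContactForm: View {
        @StateObject private var model = ContactViewModel()

        var body: some View {
            Form {
                TextField("Name", text: $model.name)
                TextField("Address", text: $model.address)
                TextField("City", text: $model.city)
                TextField("State", text: $model.state)
                TextField("Zip", text: $model.zip)

                Button("Save") {
                    print(model.makeContact())
                }
                .disabled(!model.isValid)
            }
        }
    }

None of this is hard, but typing it out by hand is exactly the kind of grunt work the comment describes being finished in under two minutes.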

I can see dedicated IDEs coming that do this sort of thing automatically. At which point developers will be free to concentrate on other problems.

This is such an old dream.
Boilerplate code is seen as a tedious waste of time. A programmer makes a tool that generates the boilerplate code automatically. A programmer invents a new framework with no boilerplate code. New methods of writing boilerplate code are developed. The cycle repeats.

I just finished a 900 hour contract and wrote 20,000 lines of code. I spent less than 200 hours writing the code. I spent 40 hours in meetings, 100 hours debugging, 100 hours writing documentation

Where do you work? 🙂 Where I work, the 900 hours translate to this: 500 in meetings, 100 writing code, 200 keeping the project management system updated, 100 testing (on a good project). Testing usually means doing only enough to tick off the boxes in a form.

Put it another way: if you knew exactly what you wanted your code to do, it would be a high school project or something you could give a co-op.

Respectfully, you and other professional programmers underestimate how very rare it is to have the capacity to write functioning code. I teach 2nd-semester (community) college CS majors, and the majority of them can’t write a for loop, declare an array, or understand scoping. Even if they can do that, it’s likely they can’t read a specification written in clear English.
Exactly this.
Programming is translating human intent into a format the computer can understand—code. People see an AI generating code and say, “It’s coming for your jobs, programmers”, but for that to be the case those people would need to be able to perfectly express their intent to a computer, or at least validate that their intent had been successfully expressed…at which point they themselves would be programmers, just with a natural language rather than a formal one.
What they
Way back in the before time, we had Junior Programmers, the greenest of which were called “code monkeys”. They would do the tedious actual writing of code to create functions specified by Senior Programmers. The job required a high school diploma and good results on an aptitude test. Meanwhile, the Senior programmers educated them so they could become Senior Programmers after a few years.
The Senior programmers did a lot more thinking and specifying and a lot less actual coding. They were in short supply so
AI isn’t going to replace programmers immediately. But what it is going to do is reduce the amount of work they need to do, which means either more programming can be done, or there will be fewer programming jobs.
The AI commonly produces bad answers… but it also commonly produces good ones. Programmers will spend more of their time writing test cases, which are a good idea anyway, and some of the software will be written by the computer.
Writing good test cases is hard, but it’s already necessary.
The point
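To make the test-case point concrete, here is a minimal XCTest sketch of the kind of check a reviewer might pin machine-generated code against; isValidZip is a hypothetical function standing in for whatever the model produced.

    import XCTest

    // Hypothetical function standing in for AI-generated code under review.
    func isValidZip(_ zip: String) -> Bool {
        zip.count == 5 && zip.allSatisfy(\.isNumber)
    }

    final class ZipValidationTests: XCTestCase {
        func testAcceptsFiveDigits() {
            XCTAssertTrue(isValidZip("90210"))
        }

        func testRejectsLookalikeCharacters() {
            XCTAssertFalse(isValidZip("9021O"))   // letter O, not zero
        }

        func testRejectsWrongLength() {
            XCTAssertFalse(isValidZip("1234"))
            XCTAssertFalse(isValidZip("123456"))
        }
    }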
That will not work and cannot work. Artificial Ignorance coding will have very specific security problems in it that are not accessible to test cases because they will be too complex. Testing for security is already hard for simple things and generally several (!) orders of magnitude harder than regular testing. But attackers that find these problems can then exploit a mass of systems with them, because the same mistakes will be in a lot of different code.
Testing is very limited in what it can do and can never ass
Can we teach an AI pen testing?

Nope. And there is no need to. All the standard attacks are already automated. The rest requires a creative and experienced human driving things. Caveat: While I have not personally pen-tested things (well only so minimally it really does not count), I have closely worked with pen-testers and know several.
Incidentally, pen-testing is very limited in what it can do. It cannot replace an in-depth security review. It cannot replace a code review. It cannot do anything beyond very shallow things. And it is never
I’ll see if I can run with that analogy a bit further.
Your original goals or predictions can come about in a completely unexpected way. “Linux on the desktop” was sort of code-speak for “we want mass adoption”. At this point, smartphones have largely replaced what a PC used to be for most people (basic digital consumption, communication, entertainment, and simple personal tasks), so the desktop really isn’t even the ultimate mass-adoption target anymore. But Linux is used almost everywhere else… everyw
In the past computers have been really good at some things, really bad at others. Some of the things they were bad at, humans were good at. That’s where AI is having a big impact. It lets computers be good at the things they used to be bad at but humans were good at.
That doesn’t change the things computers have always been good at. If you need a program to process financial transactions or a device driver for a new GPU, you aren’t going to write it by training an AI model. You need code that follows we
“Machine Learning” does not exist. It (and AI) are moron-speak for Statistical Modelling.
If you need a program […] for a new GPU, you aren’t going to write it by training an AI model. You need code that [..] produces exactly the right result every time
I agree, sure, but would you please mind letting AMD know?
It strikes me that while the amount of grunt work necessary to make any kind of “app” has gone down somewhat over the past 30 or 40 years, the grunt work has always required the smallest portion of the developer’s mental cycles, compared to the actual business logic.
At work we’ve got code, some of which dates back to the 80s. Aside from some references to the memory structure of the VAX or whatever it originally ran on, most of the code is generally equivalent to what one would write today. In some places t
Indeed. The fact of the matter is that coding is a creative act that does require understanding. Like all engineering design work. And if you look at established engineering fields (which coding is not at this time), the one part they can never get rid of is the engineer doing the thinking and the designing.
> Generalized AI is still 100+ years out.
Hard disagree. We have no definition of intelligence yet, or even a basis from which to describe it. We can measure it, but the only true measure seems to come in high-stakes games for which humans are the only viable participants. The AI developed so far is little more than a tool for a human to use, and not a competitor to a human.
The best we can do right now is measure intelligence with super primitive means such as Turing tests. There has been zero, ZERO prog

Generalized AI is still 100+ years out.

I used to think like that, too. Then, after reading article after article about the progress AI has made in the last 50 years, I realized that I was short by at least an order of magnitude.
I’m convinced that it is at LEAST 1000 years out.
Modern AI hype very closely resembles the notion from The Time Machine that steam power will enable time travel. It just isn’t going to happen. We have neither the hardware nor the software to make AI anything more than glorified code completion.
If your job is so trivial that it can be automated, then it *should* be automated.
Factory jobs are an example. These jobs are mind-numbing, dehumanizing work. Automation is a good thing, freeing people to do more human things with their time. Yes, I realize that some people can’t be, or don’t want to be, retrained. Change takes time, but that doesn’t mean change shouldn’t happen.
One impetus for the Babbage engines was to create navigational and mathematical tables. Skilled mathematicians would compute the non-linear brackets and then semi-skilled labor would compute the linear intervals between them. It would be
… but keep in mind that the “AI” Alexa provides is actually just warehouse sweatshops of real people. They sell it as AI but it’s the furthest thing from it.
If Alexa is really powered by people (in any significant way), they aren’t worth their food rations. Or it’s amazing how perfectly they make mistakes that machines would make, so as to hide their real nature.
But I have more gotten tired of the same stupid crap being claimed again and again and again. Programming is engineering (No, I will not discuss this, if you cannot see it, then that is a limitation on your side.) and engineering is hard and cannot be automatized because you need to understand what you are doing. All the stuff that could be “automatized” has already been put into libraries or can be put into libraries. For the rest, it is just not possible. Artificial Ignorance is dumb as bread and can only do statistical classification, pattern matching and lookups in catalogs. It has zero clue what it is doing. It has no insight. It has no common sense. And it will remain that way for the foreseeable future, because we have absolutely nothing, not even in theory, that could do better. (Just for completeness: Go away physicalists, nobody knows whether humans are “just machines”, but it very much does not look that way at this time. Take your deranged, self-denying religion someplace else.)
As to the claims of “programming going away” or “being automatized”, these are basically as old as programming itself. When I went to university about 30 years ago, the 5GL project that was supposed to automatize programming using constraint solving had just failed resoundingly. The idea was that you specify your problem using constraints and the machine generates the code. Turns out constraint solving is too hard for machines at the complexity needed. Also turns out (as a side result) that specifying a problem using constraints is about as hard as writing code for it directly and requires more expertise and experience. Since then, these moronic claims have cropped up again and again and again. I have no idea what the defect of the people making these claims is, but it must be a serious, fundamental defect, because they will just not go away and refuse to learn from history.
Ahahaha, yeah, I forgot that classic fiasco from around 1960 (!). To be fair, I was not born yet at that time.
This is a great and concise observation, should be upvoted.
I have been programming (it used to be called that) for pay for a half century now, and I can’t recall a year when it was not predicted that the job of programming was going to be automated. In just a year or two, look at all the progress we are making.
There is something about the lay understanding of technology that promulgates this grail as something that is real. So we keep getting these predictions.
I’m not an expert in AI by any means but I feel like AI is still far away from “understanding” what it’s doing or working with and perhaps understanding is the most important part of trying to translate something into code.
As to whether “understanding” requires consciousness, I’ve no idea. I hate the term because it’s so loosely defined, and even real, wet intelligence research can’t quite pin down what it is or whether it’s necessary at all to be truly intelligent. E.g. many biologists claim that even simple forms of life such as gras
AI == Coder. A monkey that knows not what it is copying and pasting, merely that it is a statistically significant snippet.
Copying and pasting a bunch of “Statistically Significant Snippets” does not a working program make.
There exist exactly zero working programs created by this method. They have all been massive and spectacular failures.
The Jevons paradox [wikipedia.org] shows how by decreasing the amount of a resource needed to make a product, you can actually increase the amount of resource that is used. As the product becomes cheaper (because less of the resource is needed), demand for the product rises enough to offset the smaller amount of resource per product.
It’s entirely possible that this “paradox” is relevant here, with programmer labor as the resource in question. The cloud, AI, and other “technology surges” have made programmers more efficient – allowing them to produce more product per unit labor, and thus to sell the product for a lower price (frequently free these days!). This has in turn increased demand for software – perhaps enough to entirely offset the lesser number of programmers needed to make a particular unit of software.
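A toy calculation shows the direction of that effect; the numbers below are made up purely for illustration.

    // Jevons-style arithmetic with programmer labor as the resource.
    let hoursPerProjectBefore = 1_000.0
    let projectsDemandedBefore = 100.0
    let totalBefore = hoursPerProjectBefore * projectsDemandedBefore   // 100,000 hours

    // Tooling halves the labor per project, but cheaper software means more of it gets built.
    let hoursPerProjectAfter = 500.0
    let projectsDemandedAfter = 300.0
    let totalAfter = hoursPerProjectAfter * projectsDemandedAfter      // 150,000 hours

    print(totalAfter > totalBefore)   // true: less labor per product, more labor overall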
I don’t think so. If Artificial Ignorance was actually capable of generating functional code with some reliability, then yes. But it is not and it will not be anytime soon because that requires some actual understanding and AI does not have that and may never have that. The approaches known today will certainly never get there. Statistical classifiers can have an incredible width of view, but they can never do more than scratch the surface. No approach “trained” on data can as the amount of data needed incr
Fewer programmers. Greater margins. More profit. And completely imaginary.
In other news “VVet Dream” gets censored and replaced with Eet Fream. ROTFL.
One would expect that solving a given software problem would require fewer and fewer lines of code. For a while it looked like this would be the case. After all, writing a simple program in dBase (yes, it had a programming language) essentially just consisted of defining forms for your data sets. The database itself would take care of all manipulations. The same was true for Delphi, which offered ways of automatically generating forms which you could then edit.
One would think that such database applications today would be much simpler, but they aren’t. Instead people now build multi-layer systems where the functionality is not only duplicated in the database and the GUI, but also in a server layer in between.
If we follow the trends we see today, we will see people writing more and more code to solve the same problems, while adding more and more external dependencies which continuously get worse and worse. I mean, we now have web apps, written with insane amounts of developer time, yet they barely are able to compete with mediocre desktop software from the 1990s.
I’m a pretty decent programmer. Good enough that I’ve made a career out of it and none of my code will (likely) ever make it to the Daily WTF. But there are programming concepts that I’ve always struggled to understand because frankly, the documentation is obtuse and hard to parse, and it wasn’t really worth my time.
For instance, the Wikipedia entry on monads is frankly just obnoxious to read. I program in elisp a bit, so trying to understand monads is mostly about satisfying some curiosity, but something about the article just doesn’t click with me and I have to move through it really slowly.
I asked ChatGPT to explain it to me in simple terms, and it did a good job. It even provided an example in JavaScript. Then I asked it to provide an example in elisp and it did that too. I’m not super concerned about correctness of the code, as long as it’s generally okay, and it seems to have done an okay job.
I’ve also asked it to document some elisp functions that I’ve always thought were poorly described (emacs’ documentation can really be hit or miss) and it really did a great job.
I’m not so arrogant as to say that these models won’t one day generate a lot of good, usable code, but I honestly think that this ability to collate a tonne of data and boil it down to something understandable could fill in the gaps in a lot of documentation. The longest, most tedious parts of my job very often boil down to research for some engine-specific feature that I need, or some sort of weird platform quirk. For publicly available engines like Unreal, this will honestly improve my productivity quite a lot.
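For what it’s worth, the explanation it gives boils down to the usual “wrappable, chainable container” picture. Here is roughly the same idea sketched in Swift, with Optional’s flatMap standing in for monadic bind (not the JavaScript or elisp it actually produced):

    // A monad, informally: a wrapper type, a way to put a value into the wrapper,
    // and a way to chain computations that themselves return wrapped values.
    // Optional already has all of that: Optional(x) wraps, and flatMap chains,
    // short-circuiting the rest of the chain as soon as anything is nil.

    func parseInt(_ s: String) -> Int? {
        Int(s)
    }

    func reciprocal(_ n: Int) -> Double? {
        n == 0 ? nil : 1.0 / Double(n)
    }

    // Each step runs only if the previous step produced a value; no explicit nil checks.
    let good = parseInt("4").flatMap(reciprocal).map { $0 * 100 }   // Optional(25.0)
    let bad  = parseInt("0").flatMap(reciprocal).map { $0 * 100 }   // nil

    print(good as Any, bad as Any)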
This all reminds me of an interesting test of genetic algorithms programming FPGAs a few years ago. There were a few tests where the target FPGA did perform correctly according to the test conditions.
Then they copied the program into another FPGA, same type, and it failed miserably. Analysis was very difficult because the GA had ignored any standard conventions, but they found the problem finally. The GA had programmed chains of gates to act as analog resonators (against any specification of the chip) that
The level of bullshit and outright nonsense they have been publishing has become untenable. This is not an organization I want to continue to be associated with. A pity, really.