Is this article an accurate reflection of people's experience, or more generic LinkedIn clickbait? I'm assuming the latter with content like
>Substantial and ongoing improvements in AI’s capabilities, along with its broad applicability to a large fraction of the cognitive tasks in the economy and its ability to spur complementary innovations, offer the promise of significant improvements to productivity and implications for workforce dynamics.
I keep waiting for the industry-shifting changes to materialise, or at least begin to materialise. I see promise in the coding tools, and personally find Claude- and Cursor-like tools to warrant some of the general hype, but when I look around for similar changes in other tangentially related roles I draw a blank. Some of the Microsoft meeting-minute summaries are good, while the transcripts are abysmal. These are helpful, but not necessarily game changing.
Hallucinations, or even the risk of hallucinations, seem like a fundamental show stopper for some domains where this could otherwise be useful. Is this likely to be overcome in the near future? I'd assume it's a core area of research, but I know nothing of this area, so any insights would be enlightening.
What other domains are currently being uplifted in the same way as coding?
I'm pretty certain that one of the first things we'll see is more jobs recording worker activity (computer activity, calls, video recording) as training data for future automation. Data from teleoperation of robots would be especially useful for physical tasks.
That data is valuable beyond just future training. You can automate a lot of management using that information.
Speculation can be enjoyable, but given the rapid pace of AI advancements, where today's capabilities may be obsolete within a year, it's wise to approach any claims with a healthy dose of skepticism.
Are any products using LLMs on the horizon, other than code completion? I have been a power user, hoping my workflows would improve. Just about every workflow got slower with statistical AI, and I am back to using logical AI like WolframAlpha and Bayesian models.
There are entire categories of SaaS and enterprise vendors that are about to be completely blown away.
For example -- not long ago, when you wanted to do l10n/i18n for your business, you'd have to go through a pretty painful process of integrating with e.g. translations.com. If you're running an e-commerce site with a lot of new products (and product descriptions) coming online quickly, that whole process would be slow and expensive.
Fast forward to today -- a well-crafted prompt to Llama 3.1 within a product pipeline makes that vendor completely obsolete. Now, you could argue that this kind of automation isn't new; you could have done it with an API call to Google Translate or something similar, and sure, that's possible. But now you have one single interface into a very broad, capable brain that can carry out any number of tasks.
If I were a vendor whose business was at all centered around language, or data ETL, or anything that involves taking text and doing something with it, I would be absolutely terrified of someone writing a 20-line Python script with a good system prompt that would make my entire business's reason for being evaporate.
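To make that concrete, here is roughly what such a script could look like. This is a minimal sketch, not a tested pipeline: it assumes an OpenAI-compatible endpoint (for example, a local Ollama server) serving a Llama 3.1 model, and the base URL, model name, and prompt wording are all illustrative assumptions.

```python
# Hypothetical "20-line translation vendor": batch-translate product copy
# through an OpenAI-compatible endpoint serving Llama 3.1.
# base_url / model name are assumptions (e.g. a local Ollama server).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

SYSTEM_PROMPT = (
    "You are a professional e-commerce translator. Translate the user's "
    "product description into {lang}. Preserve brand names, units, and any "
    "HTML tags exactly. Return only the translation, nothing else."
)

def translate(description: str, target_lang: str) -> str:
    response = client.chat.completions.create(
        model="llama3.1",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(lang=target_lang)},
            {"role": "user", "content": description},
        ],
        temperature=0.2,  # stay close to the source text
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    copy = "Stainless steel water bottle, 750 ml, leak-proof lid."
    for lang in ("German", "French", "Japanese"):
        print(f"--- {lang} ---")
        print(translate(copy, lang))
```

Whether raw output like this is shippable without human review is, of course, the open question.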
That's not the state of things today at all, and it probably doesn't describe the near- or medium-term future either.
Using the unmonitored output of an LLM translation service for your commercial content, in languages you can't read, represents a big reduction in quality assurance. It greatly increases the risk of brand embarrassment and possibly even product misrepresentation, while leaving you with no recourse to blame or shift liability.
> If I were a vendor whose business was at all centered around language, or data ETL, or anything that involves taking text and doing something with it, I would be absolutely terrified of someone writing a 20-line Python script with a good system prompt that would make my entire business's reason for being evaporate.
The more likely future is that existing translation houses will increasingly turn to LLM assistance to raise efficiency and lower the skill threshold for their staff, who still deliver the actual key values: quality assurance and accountability. This will likely drive prices down and greatly reduce how many people work as translators at these firms, but it's an opportunity for them, not a threat.
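In code terms the shift is small but decisive. Here's a minimal sketch of that draft-then-review shape, under the assumption that the deliverable is whatever a named human signs off on; all names here are hypothetical, not any vendor's actual API.

```python
# Sketch of the draft-then-review workflow: the model produces a cheap first
# pass, a staff linguist edits and signs off, and the sign-off is recorded so
# accountability stays with a named human.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TranslationJob:
    source: str
    target_lang: str
    draft: str = ""
    final: str = ""
    signed_off_by: str = ""

def run_draft(job: TranslationJob, llm_draft: Callable[[str, str], str]) -> TranslationJob:
    # Machine pass: fast and cheap, but never the deliverable on its own.
    job.draft = llm_draft(job.source, job.target_lang)
    return job

def sign_off(job: TranslationJob, reviewer: str, edited: str | None = None) -> TranslationJob:
    # Human pass: the reviewer's edit (or explicit approval of the draft)
    # becomes the deliverable, with their name attached to it.
    job.final = edited if edited is not None else job.draft
    job.signed_off_by = reviewer
    return job

# Usage with a stand-in draft function:
job = run_draft(
    TranslationJob("Leak-proof 750 ml bottle.", "de"),
    lambda text, lang: f"[{lang} draft] {text}",
)
job = sign_off(job, reviewer="A. Linguist", edited="Auslaufsichere 750-ml-Flasche.")
print(job.final, "| signed off by", job.signed_off_by)
```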
LLMs don't seem to be on track to be the foolproof end-user tools that the early hype promised. They don't let us magically do everything ourselves, and (like crypto being incompatible with necessary regulations) they don't offer all the other assurances that orgs need when they hire vendors. But they can very likely accelerate trained people in certain cases, and still have an impact on industry through specialty vendors that build internal workflows around them.
You may be right, but I would approach this with an open mind. Whether the trajectory of AI development remains asymptotic to human intelligence or surpasses it entirely, the increasing investment, the involvement of diverse stakeholders, and the growing stakes suggest that virtually every job role may face disruption or, at the very least, re-evaluation.
This sums up my view on AI and machine autonomy in general: the human added value is accountability. For a similar reason, outsourcing to faceless offshore companies usually does not work out.
There is nothing to suggest that AI will not require an expert in the loop in the future. Every single one of these products carries a disclaimer that it can produce false and misleading results.
Of course, there are only so many experts needed for a given problem domain to fulfill all the demand, but that is true even without automation.
Phone self-service systems, tutoring services, contract review, recruiting, just to name a few.
> contract review
Yeah, no.
As part of a hilariously bad set of actions by a corporation that I had to threaten with legal action, I decided to see what ChatGPT had to say, knowing in advance all the problems with it in this field, and it was pretty much what I expected: enough to be interesting and get the general vibe right, but basically every specific detail that I could check independently without legal training of my own was a citation of something that didn't exist.
I'd just about trust them on language tutoring, but even then only on the best supported languages.
Use them as enthusiastic free interns-to-juniors depending on the field. At some point, they'll be better, but not in predictable ways or on a predictable schedule.
But they are pretty general in their abilities (not perfectly general, but pretty general), so when they can do any of what you've suggested to a standard that makes those job categories unemployable, they're likely to perform all other text-based economic activity to a similar standard (text-based rather than language-based: it has to be text, but it doesn't have to be words).
One word: spam.
AI has absolutely revolutionized spam and spam detection. Spammers can now generate absolutely unheard-of amounts of complete bullshit. And on the other side, spam-detection services and algorithms are getting better and better at detecting it, sorting it, and filtering it based on user preferences. Tons of people are enjoying openly AI-generated content; and the content that isn't enjoyed by people is instead enjoyed by other AI bots, driving up the engagement rates. That behavior, too, is being monitored by other AI, which then prompts spammers to improve their AI so they can evade that AI and get their stuff seen by the engagement AI.
So we have server farms full of computers that are making complete shit that is then thoroughly enjoyed by other server farms full of computers to drive up engagement numbers while still other server farms full of computers are working to detect the fraud and remove it.
Meanwhile, in the real world, we're still hurtling towards climate collapse. But that's okay, we're finally looking into building nuclear reactors again. To power more data centers.
The future is fucking stupid.
AI is just one part of a larger and longstanding conversation about the future of work in an era of automation. We've long speculated that at some point we won't need the entire population to do all the work. Economists have talked about 20% of the population doing the work.
This can go one of two ways:
1. Fewer jobs will be used to further suppress wages. What little wages people earn will go to essentially subsistence living. The extreme end of this looks like the brick-kiln workers in Pakistan, India, and Bangladesh. A lot of people, myself included, call this neofeudalism, because you will be a modern-day serf. The wealth concentration here will be even more extreme than it is now. We're also starting to see this play out in South Korea; or
2. The created wealth will elevate the lowest among us so work becomes not required but a bonus if you want extra. The key element here is the removal of the coercive element of capitalism.
To put this in perspective, total US corporate profits are rapidly approaching $4T per year (the headline figures are reported quarterly at an annualized rate). That's roughly $15,000 per US adult, every year. Some would call that the exploited surplus labor value.
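For what it's worth, the arithmetic at the annualized figure (both inputs are rough approximations):

```python
profits_per_year = 4e12  # ~$4T annualized US corporate profits (approximate)
us_adults = 258e6        # ~258 million US adults (approximate)
print(round(profits_per_year / us_adults))  # ~15504, i.e. roughly $15,000 each
```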
Here's another number: we've spent something like $10T on the War on Terror since 9/11. What could $10T buy? Quite literally everything in the United States of America other than the land.
What's depressing is that roughly half the country is championing and celebrating our neofeudalist future even though virtually none of them will benefit from it.