![](https://sh.itjust.works/pictrs/image/d6d748ee-ad58-496c-a059-75d92e724307.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
Yes, absolutely. That is a concern that I too share, fellow meat being. We should be vigilant against superior, more capable, and really friendly artificial intelligences.
At every step in the process, it looked to those around me as if whatever I was using was going to be used forever. I didn’t set any lofty goals.
This is absolutely the right approach, even if you were planning to quit from the start (not the case with you, but still). “This is my last ever cigarette” just caused me to delay and delay and delay. The only realistic way to do it for me was one craving at a time (“I’m not smoking for the next hour”), then a day at a time. Handling the hours and days was hard, but once you manage those, the weeks and months take care of themselves.
Vaping for me was a major misstep. Just caused me to consume more nicotine than when I was smoking.
There are two separate addictions going on with smoking: habit and chemical. What patches, nicotine gum, etc. are trying to help people do is tackle them separately.
This means you can focus on getting out of the habit of lighting up after a coffee, or after a meal, or whatever triggers you had, while delaying the chemical withdrawal (which seriously messes with your head) until later. Tackling the two separately is easier for many people.
With that said, patches don’t work for everyone, and I hope you find the cessation aid (if any) that works for you. Quitting smoking is an absolute bitch.
For me personally, the most helpful aid was nicotine gum, and then swapping out the nicotine gum for normal gum once I was confident I’d kicked the habit part and could focus on the chemical withdrawal.
The kitchen is operated by volunteers and relies on donations and food banks. I believe this is also common practice in many temples within India proper.
Here’s a great little mini-documentary I saw on exactly that a few months back. Sikh temples seem amazing in terms of the sheer numbers of people they feed with no limiting criteria.
Lol, took me a minute to figure out you’re literally talking about a football match happening now. I was re-reading my comment thinking “Wait, what’s this got to do with Ukraine? Did the Romanian government do something that hit the news I don’t know about? What does this mean?!?” xD
Probably most countries think so of themselves.
Funnily enough, Romanians are the exact opposite in this regard. Romanians tend to think that Romania is terrible, backwards, and filled with awful people. That isn’t exactly the case (like any country, it has its pros and cons, and there’s a lot we need to work on) but it is how they tend to see it.
Even the question of “who” is a fascinating deep dive in and of itself. Consciousness as an emergent property implies that your gut microbiome is part of the “who” doing the thinking in the first place :))
So, first of all, thank you for the cogent attempt at responding. We may disagree, but I sincerely respect the effort you put into the comment.
The specific part that I thought seemed like a pretty big claim was that human brains are “simply” more complex neural networks and that the outputs are based strictly on training data.
Is it not well established that animals learn and use reward circuitry like the role of dopamine in neuromodulation?
While true, this is way too reductive to be a one-to-one comparison with LLMs. Humans have genetic instinct and a body-mind connection that isn’t cleanly mappable onto a neural network. For example, biologists are only just now scraping the surface of the link between the brain and the gut microbiome, which plays a much larger role in cognition than previously thought.
Another example where the brain = neural network model breaks down is the fact that the two hemispheres are much more separated than previously thought. So much so that some neuroscientists are saying that each person has, in effect, 2 different brains with 2 different personalities that communicate via the corpus callosum.
There’s many more examples I could bring up, but my core point is that the analogy of neural network = brain is just that, a simplistic analogy, on the same level as thinking about gravity only as “the force that pushes you downwards”.
To say that we fully understand the brain, to the point where we could even make a model of a mosquito’s brain (220,000 neurons), is, I think, mistaken. I’m not saying we’ll never understand the brain enough to attempt such a thing, I’m just saying that drawing a causal equivalence between mammalian brains and neural networks is woefully inadequate.
That’s a strong claim. Got an academic paper to back that up?
This is why I strictly refer to these things as LLMs. That’s what they are.
I’m happy with the Oxford definition: “the ability to acquire and apply knowledge and skills”.
LLMs don’t have knowledge as they don’t actually understand anything. They are algorithmic response generators that apply scores to tokens, and spit out the highest scoring token considering all previous tokens.
If asked to answer 10*5, they can’t reason through the math. They can only recognize 10, * and 5 as tokens that, in the training data, are usually followed by the 50 token. Thus, 50 is the highest scoring token, and is the answer it will choose. Things get more interesting when you ask questions that aren’t in the training data. If it has nothing more direct to copy from, it will regurgitate a sequence of tokens that sounds as close as possible to something in the training data: thus a hallucination.
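The “pick the highest-scoring token” idea above can be sketched in a few lines of Python. This is a toy illustration with made-up scores and greedy selection, not how a real LLM is implemented (real models compute scores with a neural network over a huge vocabulary, and often sample rather than always taking the top token):

```python
# Toy sketch: hypothetical scores for which token follows a given context.
# The contexts and numbers here are invented purely for illustration.
scores = {
    ("10", "*", "5", "="): {"50": 0.92, "15": 0.05, "105": 0.03},
}

def next_token(context):
    # Greedy decoding: return the candidate token with the highest score.
    candidates = scores[context]
    return max(candidates, key=candidates.get)

print(next_token(("10", "*", "5", "=")))  # prints "50"
```

The point of the sketch is that nothing in this lookup “understands” multiplication; the answer falls out of which continuation scores highest for the context it has seen.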
Honest question: isn’t macOS considered the OS of choice for video and music editing?
Linux from Scratch and Gentoo are also pathways to abilities some would call… unnatural
Start looking now. Tell prospective employers that you’re working on the certification and include it in your CV (as a work in progress, ofc). Job searches take a long time, and the sooner you start, the sooner you’re out.
Edit: @MrBobDobalina@lemmy.ml has exactly the correct approach for getting it in writing. Keep it professional, emotionless, as close to an accurate summary of the situation and the decisions made as possible.
Ah, I misunderstood then, sorry. But still, even with all the investment in the world, LLM is a bubble waiting to burst. I have a hunch we will see truly world-altering technology in the next ~20 years (the kind that’d put huge swathes of people out of work, as you describe), but this ain’t it.
There’s an upper ceiling on capability though, and we’re pretty close to it with LLMs. True artificial intelligence would change the world drastically, but LLMs aren’t the path to it.
I agree with everything you say here, but I thought the setup-payoff joke structure and the fact I intentionally swapped testing and production for comedic effect made it obvious enough. I guess Poe’s law strikes again.
Every software project, without exception, has a testing environment.
Some even have a separate production environment too.
So basically the Lemmy version of Subreddit Simulator, but allowing users as well?