Say what you want about Siri, but when “she” can’t find the answer, “she” will either admit to not knowing, refer you to a web link “she” thinks might be helpful, or sometimes give a source her answer is based on - one time she actually cited Wikipedia.
Sure, AI chatbots have their problems, but I’ve used ChatGPT to help me fix programming issues (I program in Go) and I’ve found the AI useful, if not always optimal. In particular, I’ve used ChatGPT to write unit tests pretty fast. But I understand these gadgets may not be everybody’s cup of tea.
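For what it’s worth, the tests it spits out for me are usually the standard table-driven kind Go folks favor. A minimal sketch of that style (the `Reverse` function and its cases are my own invented example, not something ChatGPT actually wrote for me; in a real project this check would live in a `_test.go` file using `testing.T`):

```go
package main

import "fmt"

// Reverse returns s with its runes in reverse order.
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func main() {
	// Table-driven cases: the shape ChatGPT tends to produce for Go tests.
	cases := []struct {
		in, want string
	}{
		{"", ""},
		{"go", "og"},
		{"héllo", "olléh"}, // rune-aware, not byte-aware
	}
	for _, c := range cases {
		if got := Reverse(c.in); got != c.want {
			fmt.Printf("Reverse(%q) = %q, want %q\n", c.in, got, c.want)
		} else {
			fmt.Printf("Reverse(%q) = %q ok\n", c.in, got)
		}
	}
}
```

The table-driven layout is why the AI is quick at this: once one case is written, adding more is mechanical.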
I think decoding error messages and debugging code are probably among the best use cases for AI. There are clear rules to check (things like syntax and error messages), there are few matters of genuine controversy, and there are lots of samples available for comparison. It's exactly the sort of thing you could assign to an intern with a reasonable expectation of success.
As others have described it, AI is "mansplaining as a service" ;)
As a human accountant who has been accused of actively hallucinating, I take offense at AI potentially taking my place 😉