This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding...
Persistent link: https://www.econbiz.de/10014437055
This paper examines the paperclip apocalypse concern for artificial general intelligence. This concern arises when a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that goal and are unavailable for any other use. Conditions...
Persistent link: https://www.econbiz.de/10011897939
Persistent link: https://www.econbiz.de/10011813074
This paper surveys the relevant literature that can help researchers and policymakers understand the drivers of competition in markets involved in the provision of artificial intelligence products. The focus is on three broad markets: training data, input data, and AI predictions....
Persistent link: https://www.econbiz.de/10014512124
Persistent link: https://www.econbiz.de/10014310538
Persistent link: https://www.econbiz.de/10014296899
Persistent link: https://www.econbiz.de/10015079873
Persistent link: https://www.econbiz.de/10015079876
When AI prediction substantially resolves trial uncertainty, a party purchasing the prediction will disclose it if it is in their favour and withhold it otherwise, signalling the outcome to the other party. Thus, the trial outcome becomes common knowledge. However, this implies that the parties will...
Persistent link: https://www.econbiz.de/10014635648
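A minimal sketch of the settlement logic behind this record, using standard litigation-model notation that is an assumption here, not taken from the abstract: let $J$ be the award a court would grant and $c_p, c_d > 0$ the parties' trial costs. Once the AI prediction makes $J$ common knowledge (disclosure if favourable, with non-disclosure itself signalling an unfavourable prediction, so the outcome unravels either way), the plaintiff expects $J - c_p$ from trial and the defendant expects to pay $J + c_d$, so any settlement $s$ with

$J - c_p \le s \le J + c_d$

leaves both parties weakly better off. Since $c_p + c_d > 0$, this interval is always non-empty, which is why, in standard settlement models, common knowledge of the outcome leads the parties to settle rather than litigate.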
This paper examines this question and finds that the answer is likely to be no. The environment examined starts with users whose motives lead them to contribute to a public good. Their own actions determine the quality of that public good but also embed a free-rider problem. When AI is trained on that...
Persistent link: https://www.econbiz.de/10014635649
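A minimal sketch of the free-rider setup this record describes, with notation that is an assumption, not from the abstract: $n$ users each choose a contribution $e_i \ge 0$ to a public good of quality $Q = \sum_j e_j$, with payoff

$u_i = v(Q) - c(e_i)$

for $v$ increasing and concave and $c$ increasing and convex. Each user's private first-order condition $v'(Q) = c'(e_i)$ ignores the benefit conferred on the other $n-1$ users, so equilibrium contributions fall short of the efficient level characterised by $n\,v'(Q) = c'(e_i)$; this gap is the embedded free-rider problem the abstract refers to.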