Dataset Viewer (auto-converted to Parquet)

Columns:

* text: string (length 1 to 41.2k)
* label: string (128 classes)
* dataType: string (2 values)
* communityName: string (128 classes)
* datetime: date (2015-02-24 to 2025-04-26)
* username_encoded: string (length 136 to 160)
* url_encoded: string (length 220 to 476)
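For anyone who wants to query the data rather than scroll it, here is a minimal sketch using the Hugging Face `datasets` library. The repo id below is a hypothetical placeholder, since this viewer excerpt does not name the dataset.

```python
# Minimal sketch of loading this Parquet-converted dataset for inspection.
# "some-user/reddit-ai-subreddits" is a hypothetical repo id; substitute
# the dataset's real name on the Hub.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("some-user/reddit-ai-subreddits", split="train")

# Columns per the schema above: text, label, dataType, communityName,
# datetime, username_encoded, url_encoded
print(ds.features)
print(Counter(ds["communityName"]).most_common(5))  # posts per subreddit
```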
r/airesearch
post
r/airesearch
2015-02-24
Z0FBQUFBQm9Delo1NlhHZTZhUVdial9CejRaajgwcC14OG1PMUdKZ3N4aUdNc2dzZ2VmejRYT2MtSk5DeVVSTkpncm1ZTHlDa013UnBDNnhwUU9jMnB5Ti1sX2gtQV9QelE9PQ==
Z0FBQUFBQm9DemFzcFJDM25rNnVDR3JlV1V3SlJlcllxNHMzU093cXF2UG5ZUnNGcVJFcjlCTUk1TzJIYUhFdDBiNE1XekIxejF2Tkg3LVJTVkNBUWNEaHJ4SHNTb2w2aDBpY19jZW4tOWVVbElpMHpVVWltTkROR1FMaUV1MllNNi1xSWN2ZzJjOXJXbkVmby10Z0FKakdUZHlqd21BZ3BwTHh1S1hQZHVHWEFOZDdkaFNRMDd3PQ==
Everyone's heard of the debate about what cars should do in cases where they have to choose between harming the driver and harming pedestrians or other drivers. However, in a [recent AI conference](https://www.reddit.com/r/ControlProblem/comments/4qndcn/notes_on_the_safety_in_artificial_intelligence/), one of the speakers brought up a case where a vehicle would have to decide whether to swerve to avoid a deer. Swerving has a 1 in 10 million chance of killing the human and a 1% chance of killing the deer; driving straight has a 1 in a million chance of killing the human and a 75% chance of killing the deer: https://i.imgur.com/KLhvmb6.jpg You may disagree with the specific numbers, but how do you think these situations should be handled? Apparently, people are irrationally protective of animals when they see them in the road, and perhaps the role of autopilots could be to override the human instinct to avoid hitting animals (at excessive personal risk). After all, people have no problem killing animals just for pleasure/food, which is inconsistent with how they react to animals in the road.
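One way to make the trade-off concrete is to compare expected harms under an explicit deer-to-human weight. The Python sketch below uses the post's probabilities; the weights tried are assumptions, since the post supplies none. Note that with the numbers exactly as stated, swerving comes out ahead on both dimensions, so the dilemma only bites if swerving is the riskier option for the human.

```python
# Expected-harm comparison for the swerve-vs-straight case above.
# `deer_weight` (value of a deer's life relative to a human's) is an
# assumption; the post gives no such number.
P_HUMAN_SWERVE, P_DEER_SWERVE = 1 / 10_000_000, 0.01
P_HUMAN_STRAIGHT, P_DEER_STRAIGHT = 1 / 1_000_000, 0.75

def expected_harm(p_human, p_deer, deer_weight):
    """Expected human-equivalent deaths for one encounter."""
    return p_human + deer_weight * p_deer

for deer_weight in (0.0, 1e-6, 1e-3):
    swerve = expected_harm(P_HUMAN_SWERVE, P_DEER_SWERVE, deer_weight)
    straight = expected_harm(P_HUMAN_STRAIGHT, P_DEER_STRAIGHT, deer_weight)
    # With the numbers as stated, swerving wins for every weight, since it
    # is safer for the human *and* the deer; a genuine trade-off requires
    # swerving to raise the risk to the human.
    print(f"weight={deer_weight:g}: swerve={swerve:.2e}, straight={straight:.2e}")
```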
r/aiethics
post
r/AIethics
2016-07-01
Z0FBQUFBQm9Delo1ZzhxTlZvNVVTYTJPU3JuM0VwU2YxUGxSRU9aZjBub3R0Z0dpX0Rkdnl6VWVkeU1JMzJlYmZ6dlFVRzg3UWd3WUxDOE5NaE1VVGY5SlZza1VmSV91RGc9PQ==
Z0FBQUFBQm9DemFzRm9kd0RKYkM2cklFc1llYnZUSFJXVTg1dUFNeUROSjRHM0dBdFFGeER2ZnZXYlJIUGZRT3d5a0xhOEgwSWFpS2NnQWpTMi0tb2FvLVJfT2FEVDUzWjY1bWFqYVZfWUdEVm9Xc24zTnpuVUhBRjRzbzZmbUdFLWlfUFNZYmoxWXR5LUJWelRhSWIzTlRIX3NrenV5N0s1aUc5ZkJKWFhFQVd4WWx2YXdkSHFBOE8wejlZcGw5cU1TTjZrWlQ4NFNU
If I add a constant c to my loss function Loss_new = loss + c, then the gradient with respect to the parameters, and hence the learning, is unchanged. However, I'm wondering if it's really ethical to add a positive constant. Won't the net feel better about itself if it has a lower loss? Perhaps it would be most ethical if I always subtract 999.999 from my loss function. That way the neural network will be motivated by a sense of self-improvement and ambition rather than a fear of failure.
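The mathematical premise checks out. Here is a minimal PyTorch sketch (illustrative tensors, not anyone's actual training code) showing that a constant offset leaves the gradients untouched:

```python
# Minimal PyTorch sketch: adding a constant c to the loss leaves the
# gradient (and hence the learning) unchanged, since dc/dw = 0.
import torch

w = torch.tensor([2.0, -1.0], requires_grad=True)
x = torch.tensor([1.0, 3.0])

loss = ((w * x).sum() - 4.0) ** 2
loss.backward()
grad_original = w.grad.clone()

w.grad.zero_()
loss_ethical = ((w * x).sum() - 4.0) ** 2 - 999.999  # the "ambitious" variant
loss_ethical.backward()

print(torch.allclose(grad_original, w.grad))  # True
```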
r/aiethics
post
r/AIethics
2016-07-02
Z0FBQUFBQm9Delo1MUE5ak1oS1UyOW5Ib3QzakgyUEM2eUYyTHkzWFZXbk1DeTNkRzljVjBtMXh0a0ExQmhENU90U3RUdXN2alpGQzQzYk1WZnZveTFTS0V1VUI1VVExMUE9PQ==
Z0FBQUFBQm9DemFzNmMtbjZURmNSbkZqZUlSOEtXeGRDcERrSDFsaUJNVVlWV1JoUGJmN0lSMHBhN3h3UjFhZVNJMlVYZ01HclF4X29yaWhFQV9NdU15RHdHSnRqVkZXejVod2xPTnlGU24wT182ajV5NUxCSFhoa0JBZDZSd3RUZG5HbGdudF9odUVIX1gzV0dITkhJSXVleHF2ZTR0RmI4RG4wMWJ6enA4eDNVcXAtRGNOaUJXSkgxUE9lQ1JVLVgzczB6MVZhSm9VZXdWZTQzUGNSQWFzVUlDdEpqYXF1UT09
If you try to develop an approach to machine ethics, there are two main paths you can take. The first is classical ethics, which means hard-coding moral frameworks into an agent. It could be utilitarianism, deontological ethics, something else entirely, or a combination. The advantages of doing this are that it guarantees conformity to robust moral principles and it gives humans perfect understanding of how the agent makes decisions. However, people often don't like this idea because there is no consensus on morality between philosophers and because formalizing morality can be difficult. The other approach is intuitionist. Researchers have sketched out plans to use human cognitive models and training datasets of moral behavior to create AIs which make moral decisions in ways that mirror human intuitions. The claim behind this is often that it avoids moral disagreement, and since we can all agree on some examples of morally good behavior, we can just let the AI learn our values from them. What I disagree with is the notion that encoding traditional morality is somehow a problem merely because philosophers don't agree. Why should the disagreements between philosophers stop us from building AIs rooted in classical morality, as long as we use rigorous, plausible moral theories which are actively defended by at least some of the experts in the field? Why not allow developers and engineers to imbue machines with any of those moral theories that they want and then just see what happens?
r/aiethics
post
r/AIethics
2016-07-04
Z0FBQUFBQm9Delo1cXNzZUJ5R29wRTJWX3NsNUs3WWZFNDFYdUpULURRY1ZsNzdySDRESmFzV0k1aEZjUUlXWkdYMXZVWUx4MU9vQlFmYTQ4Y3hPRVl3X0hVM2gyd1owU1E9PQ==
Z0FBQUFBQm9DemFzMFYzN2Y2NFp0R3I5MmJzaV9wRzhyY1NiUm1icnljS1J1VC1GOEdwTnNnUkdubk9HRE5GWE4tUmpDS0JQOWZXajlEUmR0VWRiUnRuS2c5WVlid1hleVRfZU1neHVPWEo5ems5eTRqT0RfUk5tVnlhSkxISzQ3NUVuNU9zUU5rMkpHd20yWVJ4WFRVcVRKSXFNYXpoc2hzdnEwNjduNmZkc1UyUlNOdWNKSFFZVlBIOTd2bF9wVE5ZbWdIM2RUTU9B
Some thinkers believe that machine intelligence will arrive at some point, but rather than *autonomous AGIs*, we'll get either [mind uploads](http://mason.gmu.edu/~rhanson/uploads.html) or [enhanced human intelligence](http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2) via e.g. brain-computer or brain-nanobot interfaces. I've heard some arguments that if those paths are likely (I find them at least plausible), then there's little reason to be too concerned about AI ethics / control problem / AI-safety: the human elements will inject the necessary ethics into the future cyborg/"em" overlords. This seems very wrong to me. Malice, error, and destructive behavior are all common amongst humans. If the future enhanced humans are less vulnerable than us and/or more powerful, then we are potentially handing any clever/malicious hacker/troll every nuclear bomb in the world and the Ring of Gyges. In other words: there will likely be some bad actors, and some bad actors may find ways to control/harm others in radical ways that are impossible today due to the limited faculties of even the most advanced human. Rather, it seems that we'd need the same AI-safety that was already motivated by the autonomous-AGI case, and make sure that it works provably even when interfaced with a brain or uploaded mind. But is this a plausible scenario?
r/aiethics
post
r/AIethics
2016-07-07
Z0FBQUFBQm9Delo1UUk0ZExOVFplVmtGaUp3VU9TM1hVRS1rZDFpTFQwNHBiZ0tleXN0QzRzREtxeUJaTmY5c3FFUU9aUTNtUDBpVGluSzN2cFlTaGtoemlsWHlYaXYwaUE9PQ==
Z0FBQUFBQm9DemFzREdsMDVpSU9FWUFfN2hsVnpEQVZ1SEZpbWZBcW5EWXpCNUM2dTZab1R0RU5lRC0xcWZUWC05bTJTQmFRS2E5SHFCQmZPWHc4Y2VQekttcjFQWXFRdTlscWNCcVk0S252di1BVWNDb05lM2ZUY1FiU2lvUWNod0RlS1pTLVNGT3I5UTJvQUhyRTZLZElrY1gzSEh4aUo1Sm9jZ0lXdFRnYmg0Rk5nQVBncnNrPQ==
This is an overview of technical readings in machine ethics (developing moral frameworks for autonomous systems). I have less familiarity with other topics in AI ethics and have not done a review of the literature in those other fields, so I'm not making a reading list for all that at the moment.

**Papers**

Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. http://commonsenseatheism.com/wp-content/uploads/2009/08/Allen-Prolegomena-to-any-future-artificial-moral-agent.pdf

Anderson, M., Anderson, S. L., & Armen, C. (n.d.). Towards Machine Ethics: Implementing Two Action-Based Ethical Theories. https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-001.pdf

Arkoudas, K., Bringsjord, S., & Bello, P. (2005). Toward ethical robots via mechanized deontic logic. https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-003.pdf

Armstrong, S. (2015). Motivated Value Selection for Artificial Agents. https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10183/10126

Bello, P., & Bringsjord, S. (2013). On How to Build a Moral Machine. https://doi.org/10.1007/s11245-012-9129-8

Bendel, O. (2013). Considerations about the relationship between animal and machine ethics. https://doi.org/10.1007/s00146-013-0526-3

Goodall, N. J. (2014). Machine Ethics and Automated Vehicles. http://people.virginia.edu/~njg2q/machineethics.pdf

Grau, C. (n.d.). There is no “I” in “Robot”: Robotic Utilitarians and Utilitarian Robots. https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-007.pdf

Lokhorst, G. J. C. (2011). Computational meta-ethics: towards the meta-ethical robot. https://doi.org/10.1007/s11023-011-9229-z ([erratum](https://www.researchgate.net/publication/263369942_Erratum_to_Computational_Meta-Ethics_Towards_the_Meta-Ethical_Robot))

Muntean, I., & Howard, D. (2016). A minimalist model of the artificial autonomous moral agent (AAMA). https://www.aaai.org/ocs/index.php/SSS/SSS16/paper/download/12760/11954

Oesterheld, C. (2015). Formalizing preference utilitarianism in physical world models. https://doi.org/10.1007/s11229-015-0883-1

Pereira, L. M., & Saptawijaya, A. (2009). Modelling morality with prospective logic. https://doi.org/10.1504/IJRIS.2009.028020

Powers, T. M. (n.d.). Deontological Machine Ethics. https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-012.pdf

Powers, T. M. (n.d.). Prospects for a Smithian Machine. http://www.iacap.org/proceedings_IACAP13/paper_52.pdf

Shulman, C., Tarleton, N., & Jonsson, H. (2009). Which Consequentialism? Machine Ethics and Moral Divergence. https://intelligence.org/files/WhichConsequentialism.pdf

Tarleton, N. (2010). Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics. https://intelligence.org/files/CEV-MachineEthics.pdf

White, J. (n.d.). Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. https://www.academia.edu/7000519/Autonomous_Reboot_the_challenges_of_artificial_moral_agency_and_the_ends_of_Machine_Ethics

White, J. (n.d.). A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model.

Wiltshire, T. J. (2015). A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans. https://doi.org/10.1007/s11023-015-9361-2

**Books**

Wallach, W., & Allen, C. Moral Machines: Teaching Robots Right from Wrong. https://www.amazon.com/Moral-Machines-Teaching-Robots-Right/dp/0199737975

**Encyclopedia Articles**

McNamara, P. "Deontic Logic", The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2014/entries/logic-deontic/

Portoraro, F. "Automated Reasoning", The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2014/entries/reasoning-automated/

---

I'm sure I've missed some, so feel free to suggest additions.
r/aiethics
post
r/AIethics
2016-08-17
Z0FBQUFBQm9Delo1OVUycV83MVR1UVZKUFB0a0tFckh0RDdSWFRRM1EzYnoyclF5S0NFcDJKTXV3cGJBeVdSVXJNMjh6ODN1WlRseGV4aFFiWXU1Z09LS1Y5VkRTRU9kcUE9PQ==
Z0FBQUFBQm9DemFzTVZ3RHN6Q1RBeGRJb1Z3T3Q4aUQwZFZSclFRNXJlaDZtSnFtd3ZxU29hNFEwRzktTXdidE5hQTRGWmdZR3JWUDFPdnp2V2t0cGpISDBndnpWNVk1XzR0MFctNTVid3JqTDFuS3R2bE1BWlNod0tscUdSQXMzSkpla3V3TVhKWDJBTkplUWF1cGRJTVlELVFxcDJ1TmtSQTVGVS15dmFNYk52eEk3R1ZiUEE0PQ==
I'm finding the subject area of algorithmic fairness very interesting. I don't know if there's a substantial archive of articles about the subject I should check out though. Anywhere to start in particular?
r/aiethics
post
r/AIethics
2016-10-01
Z0FBQUFBQm9Delo1U1RYSDBCU1MyMHctc25HN0VjSkpMUld0SXVHMDMtUWdROVdfVzc0czFhVVdwQ0dYVDhyQ1hVUHdOSFZBUFMyckJ2YXZvX1VSSnF6b25IZlhCZktzelE9PQ==
Z0FBQUFBQm9DemFzYURPX3ZWbjJEaGVGOHJPX3hkNEs3ZTZFdm5qTFJHcVl3VWYydy1DMHgtUFN3NXhIUXRvcWNlT190c1o2X1EyVW04Z29UQVNEUTM5clczS3RaalNkMFNwcGdNYlZSWWlJbWhTdnNFb0tiRHNNZG96WGtHdW9XbnIwTGVDTTFZenZZbVBPRkU1eTBwbWU0MDFZQkhyOGVIRUc4OGkxY08ydFdXc2kzSUNMUjQ5QjVEMFpBQXRGdWVfQ2EzdnJFNXlf
I'm quite stunned at this -- two days ago I was actually planning to call it quits on the subreddit and label it dead, no joke -- but we received a lot of attention yesterday after I [posted](https://www.reddit.com/r/Futurology/comments/55an2u/the_map_of_ai_ethical_issues/) an infographic to r/futurology. Everyone likes the intersection of science fiction with reality, and with self-driving cars we're already seeing the need for ethical values to be implicitly encoded into machines. I wish we had a good "introductory article" for newcomers who just want to start learning about the philosophy of AI and robotics, but none could encapsulate everything; we are here to discuss a very wide set of issues and topics that humanity will have to address in the future. Trending thread is here: https://www.reddit.com/r/trendingsubreddits/comments/55ha4r/trending_subreddits_for_20161002_rmedia_criticism/
r/aiethics
post
r/AIethics
2016-10-02
Z0FBQUFBQm9Delo1NmRWTTJ3STJCamZWSnBxVGNQR2FKZV9ONDJwVFoxZWd4OTB6b2xPNzNyQXhudkZ3YXk4blc2NGpOekxQSWgzMC03R0tQVnRLcW9kZEdzZkY1dG0yQXc9PQ==
Z0FBQUFBQm9DemFzRW9seWRqZ3dsei1waEttNTRpOVo3RHlWWnJQd003VU9KYVJRNkR2UjMyTENRYTVwOUJCYTJzQmN0NklnaDl1U3lHS2prZEtkNEpjaEVkaFVncVMzNmpYbkNwN1A1VVUwNGVaM1NoT1diQ3ViRk5mdURTZnB5Z3E3Q2FnSElJR1N3ejRid2M2UkhCc2lTc0hDNWVpUFQxdV92NUdvUWVsazllamlMR0ZrMW5rPQ==
This weekend I attended the [Ethics of Artificial Intelligence conference](https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/) at NYU. There were a ton of high-profile and interesting people there from philosophy (David Chalmers, Peter Railton, Nick Bostrom, Thomas Nagel, Paul Boghossian, Frances Kamm, Wendell Wallach) and science (Yann LeCun, Stuart Russell, Stephen Wolfram, Max Tegmark, Francesca Rossi) as well as Eliezer Yudkowsky. There were two fairly long days of talks and panels. David Chalmers (famous for his philosophy of mind and consciousness) did not officially speak but acted as chair for the event. He outlined the philosophy of the conference, which was to discuss both short and long term issues in AI ethics without worrying about either detracting from the other. He was, as usual, extremely awesome. Here is a summary of the event with the most interesting points made by the speakers.

**Day One**

The first block of talks on Friday was an overview of general issues related to artificial intelligence. Nick Bostrom, author of *Superintelligence* and head of the Future of Humanity Institute, started with something of a barrage of all the general ideas and things he's come up with. He floated the idea that perhaps we shouldn't program AI systems to be maximally moral, for we don't know what the true morality looks like, and what if it turns out that such a directive would lead to humans being punished, or something else that was pathological or downright weird? He also described three principles for how we should treat AIs: substrate nondiscrimination (moral status does not depend on the kind of hardware/wetware you run on), ontogeny nondiscrimination (moral status does not depend on how you were created), and subjective time (moral value exists relative to subjectively experienced time rather than objective time, so if a mind ran at a fast clock speed its life would be more important, all other things being equal). He pointed out that AI moral status could arise before there is any such thing as human-level AI - just like animals have moral status despite being much simpler than humans. He mentioned the possibility of a Malthusian catastrophe from unlimited digital reproduction as well as the possibility for vote manipulation through agent duplication, and how we'll need to prevent these two things. He voiced support for meta level decisionmaking - a ['moral parliament'](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html) where we imagine moral theories sending 'delegates' to compromise over contentious issues. Such a system could also accommodate other values and interests besides moral theories. He answered the question of "what is humanity most likely to fail at?" with a qualified choice of 'mind crime' committed against advanced AIs. Humans already have difficulty with empathy towards animals when they exist on farms or in the wild, but AI would not necessarily have the basic biological features which incline us to be empathetic at all towards animals. Some robots attract empathetic attention from humans, but many invisible automated processes are much harder for people to feel empathetic towards. Virginia Dignum was next; she is at the Delft University of Technology and spoke about mechanisms for automated processes to make decisions. She specified four methods of decisionmaking based on whether decisions are taken deliberately or imposed upon a system and whether the decisions are made internally or externally.
The two former features lead to algorithmic decisionmaking in machines; the latter two lead to imposed decisions predetermined by regulatory institutions. Deliberated external decisionmaking means there is a 'human in the loop' and internal imposed decisionmaking is essentially randomness. Yann LeCun concluded this section with a pretty fantastic overview of deep learning methods and the limitations which stand in the way of progress in machine intelligence. He pointed out that reinforcement learning is a rare and narrow slice of the field today and that the greatest obstacles for machines include common sense judgements and abstraction. The biggest current problem for AI is unsupervised learning, which is having machines that can learn to classify things on their own without being given clearly labelled data from humans. He showcased some of the (very cool) features of adversarial learning which are being used to tackle this. He expressed support for the orthogonality thesis, namely the idea that intelligence and morality are 'orthogonal' - just because an agent is very smart doesn't mean that it's necessarily moral. He believes we should build a few basic drives into AIs: do not hurt humans, interact with humans, crave positive feedback from trusted human trainers. He also described a couple of reasons for why he is not concerned about uncontrolled advanced artificial intelligence. One was that he is confident that objective functions can be specified in such a way as to make machines indifferent to being switched off, and the other is that a narrow-AI focused on eliminating an unfriendly general-AI would 'win' due to its specialization. In Q&A, Stuart Russell objected to LeCun's confidence in machines being indifferent to being shut off based on the fact that self-preservation as a goal implicitly falls out of whatever other goals a machine has. Paul Boghossian objected to the 'behaviorist' nature of the speakers' points of view, saying that they were exempting consciousness from its proper role in these discussions. One person asked whether we should let AIs take charge of everything and supersede humanity - Bostrom pointed out that the space of possible futures is "an enormous Petri dish" which we don't understand; an AI future could materialize as a planet sized supercomputer with no moral status, and we will need to learn how to engineer friendly advanced AI systems no matter what the plan is. The rest of the Friday talks were devoted to near-future issues with specific AI systems. Peter Asaro started with an overview of his organization, the 'Campaign to Stop Killer Robots'. He stated that targeting and killing should remain human-controlled actions. While he acknowledged that automated weaponry could result in fewer casualties on the battlefield, he believed that it was too narrow a view of the consequences. He said that it's not straightforward to translate complicated battlefield morality questions for machines to understand, and is worried about unintended initiation and escalation of conflicts through automated systems, arms races, and threats to humanitarian law. He also believes that people should only be killed with 'dignity' and that doing it with a robot robs people of this. Therefore, he called for a clear and strong norm against automated weapons. Kate Devlin of the University of London gave a brief overview of the ethics of artificial sexuality. Looking at the history of sexualized robots featured in fictional media, she noted that almost all of them are female. 
Today there is a "Campaign Against Sex Robots" which is based on the idea that sexual robots would lead to the objectification of women. Devlin does not agree, as she thinks it is too early to ban the technology and that we should explore it before thinking about banning it, especially since it does not really harm anyone. Instead she wants us to think about how to develop it correctly. There are many potential uses for these types of robots ranging all the way to the therapeutic; many of the rudimentary ones being sold today are bought by people who are incapable of forming ordinary relationships for various reasons. VR is being used in arousal tests to gauge the efficacy of treatments against pedophilia. She noted that gender issues have arisen in technology already; the history of gendered technology includes pacemakers originally designed only for men and phones too large for women's pockets. We should get into AI now to make sure that it is not designed in problematic ways. She mentioned privacy concerns, as the manufacturers of the female stimulator We-Vibe have already been sued over concerns that they were not properly informing customers of their collection of data from the devices. She wondered if we will ever get to a stage where a robot might have some knowledge of its role and refuse to give consent to its use, and if transmission/duplication of data and code between machines could serve as some form of digital sexual reproduction. Vasant Dhar of NYU spoke next about data and privacy in the era of autonomous vehicles. He said that our legal and financial liability institutions are based on outdated notions of data and that they fail to address liability and crime. However, the tools we have now even in ordinary cars for recording data can be used to improve insurance and judicial systems. He proposed black boxes for cars that would contain all relevant data to determine fault in the event of accidents, and said that customers should have the choice to share their driving data with insurance companies to get lower premiums. Dhar reiterated the importance of improving vehicle safety through autonomous driving; each percentage point reduction in vehicle accidents equates to 400 deaths and 40,000 injuries avoided every year. Adam Kolber followed up with a discussion of whether "the code is the law", based on the case study of [The DAO](https://en.wikipedia.org/wiki/The_DAO_%28organization%29), an automated capital fund that was subjected to a $50 million loss through exploitation. The answer apparently is that the code should not be the law, even though many people seemed to accept that it was. Stephen Wolfram of WolframAlpha and Mathematica fame discussed the issues of computer languages and goal specification. He said that his life work has essentially been about trying to find ways for humans to specify their goals to machines, and that this can work for ethics as well as for math. He doesn't think that any single moral theory is likely to work for guiding artificial intelligence, apparently because of Gödel's theorem and the incompleteness of computational languages. Francesca Rossi of IBM argued that for AIs and humans to interact very productively we will have to embed them in environments, so that rather than picking up a tool like a laptop or a phone, we are interacting with artificial systems all around us in our rooms and spaces. Humans will be recognized by their environments and our needs and wants will be inferred or asked about.
AI embedded in environments can have memories about humans to better serve their interests. Most of all, we will need to establish trust between humans and AIs. Peter Railton, philosopher at the University of Michigan, attacked the subjects of orthogonality and value learning. He said that we can't simply tell AIs to do what we want because our wants and values require critical assessment. He said that the orthogonality thesis might be right, but as we increasingly interact with systems and allow them to participate in our own lives and decisionmaking, the question of what it would take for them to be intelligent might involve certain features relevant to morality. He stated that AIs should be thought of as social creatures; as a simple model, self regulation in a Hobbesian social contract leads to constraints and respect derived from self preservation. A society of intelligent cooperators can resist aggression and malice, and being moral is more efficient for a community than being cunning. From these principles we have a recipe for building proto-moral agents. He discussed the 'moral point of view' required for many strong ethical theories such as Kantian ethics and consequentialism: it requires agents to have a hierarchical, non-perspectival, modal/planning-oriented, and consistent view of the world which assigns intrinsic moral weight to things. He described how all these features are also part of the process of becoming generally intelligent in the first place, implying that general social intelligence ensures the necessary information required for moral decisionmaking. In the path towards functional moral agents, we will have to build agents which can represent the goals of others and have them learn how to act in beneficial ways. So if we can build AIs that we can trust, then we are on a good path towards building artificial moral agents. In the Q&A, Eliezer Yudkowsky objected that in the long run the 'instrumental strategy' is not quite what you want because maximizing people's desires as they are explicitly revealed can lead to bad outcomes, and you have to have a view like coherent extrapolated volition which asks what people would really want. Russell objected that when an agent becomes sufficiently powerful, it has no need to cooperate anymore. Regina Rini of the NYU Center for Bioethics stated that the approaches to ethics so far described relied too much on the Western post-enlightenment view of ethics, which is a historical aberration, and excluded African, Chinese and other approaches to ethics. Railton stated that his scheme was grounded in basic empathy and not mediated by any higher order moral theory; Wolfram and Rossi said that no one ethical approach will work and AI will have to represent diverse values.

**Day Two**

Saturday was devoted to long term discussion of the future of advanced artificial intelligence. Stuart Russell, professor at UC Berkeley and head of the new Center for Human Compatible Artificial Intelligence, started with a basic overview of the control problem. He described the points made in Steve Omohundro's paper on convergent instrumental drives. He also had some pretty harsh words for the researchers in the AI community who have denied and rejected notions of the control problem without seriously engaging with the relevant literature.
He had three simple ideas which he proposed to constitute the definition of 'provably beneficial' AI: maximizing values for humans is the system's only goal; the robot is initially uncertain about these goals; and the best source of information about them is human behavior. He referred to inverse reinforcement learning as a technique for machines to learn human preferences, and said that uncertainty provides an incentive for machines to learn, ask questions, and explore cautiously. His answer to the off-switch problem is to make robots unsure of their objectives, so that they assume that the human will switch the robot off if and only if it has a good reason to, and will therefore be compliant with the action. He said that the wireheading problem can be avoided if you construct the reward signal as information about the reward function rather than as a reward itself; this way, any hijacking of the reward signal makes it useless. He said that there is a strong economic incentive for value alignment, but humans are irrational, nasty, inconsistent, and weak-willed. The next speaker was Eliezer Yudkowsky of the Machine Intelligence Research Institute. Chalmers pointed out his role there as well as his side venture in Harry Potter fanfiction. Yudkowsky started [his talk](https://intelligence.org/nyu-talk/) by pointing out how the Terminator pictures in every media article about the control problem are inappropriate. The real analogy to be used is [Mickey Mouse as the Sorcerer's Apprentice in *Fantasia*.](https://www.youtube.com/watch?v=Ait_Fs6UQhQ) He said that the first difficulty of AI alignment is that the utility functions we imagine are too simple, and the second difficulty is that maximizing the probability of achieving a given goal leads to pathological outcomes. He and MIRI are concerned with the nature of the goal of 'maximizing' and how to define goals in a way that avoids the problems of perverse instantiation. He said that the fears of AI being developed by some terrorist or rogue group were silly, as "ISIS is not developing convolutional neural nets." Instead the most powerful AI is likely to be developed by large groups in government, academia and industry. He claimed that the four central propositions which support the idea that AI is a very big problem are: the orthogonality thesis, instrumental convergence, capability gain (the speed at which advanced AI can make itself better), and alignment difficulty. He said the first two are logical matters of computer science that people always learn to accept when they reflect upon them, while the latter two are more controversial. The next talk was from Max Tegmark and Meia Chita-Tegmark. Max is a world-renowned physicist who helps run the Future of Life Institute, and Meia is a psychologist. They explained how physics and psychology provide useful tools for understanding artificial intelligence; physics tells us about computation and the constraints of the universe, and psychology tells us about the nature of well being, ways to debug the mind when reasoning about AI and methods to design psychomorphic AIs. Meia was the only speaker at the conference to discuss unemployment in any detail; she pointed out that retirement has only mixed effects on well being and that happiness comes from financial satisfaction and feelings of respect. She said that studying homemakers, part time workers and early retirees can tell us more about how an automated economy would affect people's well-being.
Max checked off [a list of common myths](http://futureoflife.org/background/aimyths/) regarding advanced AI. Meia said that we should look at the cognitive biases which have led to these misconceptions (such as availability bias leading to people worrying about robots rather than invisible artificial intelligence) and figure out how to keep similar bugs from inhibiting our thinking in the future. By the way, Max Tegmark is very cool; he has a sort of old-rocker-dude vibe, and he and Meia are super cute together. Wendell Wallach of Yale spoke next. He is the man who quite literally wrote [the book](https://www.amazon.com/Moral-Machines-Teaching-Robots-Right/dp/0199737975) on AI ethics. He distinguished top-down approaches of formally specifying AI behaviors from bottom-up approaches of value learning. He said that neither will be sufficient on its own and that both have important roles to play. He is worried that AI engineers will make simplistic assumptions about AI, such as the idea that every decision should be utilitarian or the idea that 'ethics' and 'morality' are icky concepts that can be ignored. Steve Petersen, a philosopher at Niagara University, gave the next talk, based on the draft of a forthcoming paper of his. He aims to push back against the orthogonality thesis and modulate the level of the risk assessment provided by Bostrom. His argument is that designing AI to follow any complex goal will necessarily require it to be able to learn the values of its "teleological ancestors" (the original human designers or the previous iterations of AI before it self-improved or self-modified) and arrive at a state of coherence between goals. As agents replicate, self-modify and merge in the digital world, there can be no fact of the matter about which agents are the same or different; instead there will be an 'agential soup' unified by a common teleological thread originating with the designers. Coherence reasoning leads to impartial reasoning with the goals of other agents. There were several responses to him in Q&A. Yudkowsky's objection was that reaching coherence requires a meta-preference framework with particular assumptions about the universe and ontology; therefore, for any goal, there are many preference frameworks which could fulfill it, many of which would be perverse. Russell said that just coherence is not enough because you need the systems to give special weight to humans. Max Tegmark said that the problem was the vagueness of humanity's final goals. Chalmers pointed out that the orthogonality thesis still allows for all kinds of correlations between intelligence and morality, as long as they are not necessary by design. Petersen said that he is arguing for 'attractor basins' in the possibility space of AI minds. Interestingly, he was motivated to start his research by the [Dylan Matthews Vox article](http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai) on effective altruism where Dylan thought that effective altruists shouldn't be concerned by artificial intelligence. Petersen doesn't think that AI is unimportant and thinks that Bostrom and Yudkowsky's work is valuable, but he wanted to get a more critical assessment of the level of risk when he learned that alternative altruistic projects were at stake. Matthew Liao of the NYU Center for Bioethics gave an argument for moral status on the basis of capabilities - that an entity is morally valuable to the extent that it has the physical/genetic basis for achieving features of moral relevance.
I did not get a chance to ask him if this would imply that a 'seed AI' could be the most morally valuable entity in the world. He did argue against the ideas that level of intelligence or degree of moral agency determine moral status, as we don't normally think that smarter or more benevolent humans are more morally valuable than others. Liao argued that moral theories are too specific and too high level to be generally implemented in AIs. Instead, AI will need a universal moral grammar in which to specify morality. The holy grail is to develop machines that understand why things are right or wrong. Eric Schwitzgebel and Mara Garza of UC Riverside argued for basic principles of AI rights. They introduced a very weak "no-relevant-difference" argument: the idea that there are possible AIs which have the same morally relevant features that humans do and therefore there are possible AIs with equal value to humans. They questioned if cheerfully suicidal or hardworking AI is acceptable, and stated a 'self respect principle': that human grade AI should be designed with an appropriate appreciation of its own value. John Basl and Ronald Sandler of Northeastern University argued for AI research committees to approve or deny research in cases where AI subjects might be harmed. They said it would not be very different from cases like animal testing where we have similar review boards, and sketched out details of how the proposal would work. Daniel Kahneman, one of the most famous behavioral economists in the world, made something of a surprise appearance in the final panel. He said that we should take intuitions about case studies like the trolley problem seriously, as that is how the public will think about these events, for better or for worse. He said that no matter how AI cars kill people, it will be perceived with horror whenever the first incident happens, and we should prepare for that. Intuitions depend on irrelevant factors and will especially depend on whether AIs are designed to resemble us or not. Gary Marcus, professor of psychology at NYU, gave a much needed presentation about the nature of intelligence. The previous talks in this discussion had mostly assumed that intelligence was one-dimensional and simple and that there was some fixed idea of 'human-level' AI which we could eventually reach. Of course this is a ridiculous oversimplification; intelligence is multidimensional and it is more about implementing a combination of various cognitive tools, some of which are already stronger in AIs than in humans. AIs can be better or worse than us in various domains, so we really have no idea where AIs will be in this multidimensional space. AIs could in fact be better than us at moral reasoning. He also emphasized the gap between what machine learning can do today and what human reasoning can do. Susan Schneider of Marquette University, a philosopher who has written quite a bit about AI and superintelligence, went over various issues. She argued that mind uploads might constitute death of the individual unless we prove certain ideas about consciousness and personal identity, and also claimed that designing an intelligent and morally valuable robot to serve the interests of its creators would constitute slavery. Jaan Tallinn, co-founder of Skype, also gave a quick talk.
He has been a strong financial backer for MIRI and other efforts in this space, and simply expressed his belief in the importance of the issue and his happiness at the success of the conference and the number of students who were interested in pursuing the topic. There was some final banter about the nature of consciousness which David Chalmers sat through very passively. Yudkowsky expressed optimism that one day we will have an explanation of consciousness which clears up our confusion on the matter. Nagel said that we will need to think more about the dynamics of multi-agent systems and moral epistemology. After that the event ended. The conference videos are available [here.](http://livestream.com/nyu-tv/ethicsofAI) In my opinion, the best talks were given by LeCun, Railton, Russell, Yudkowsky, the Tegmarks, Petersen, and Marcus. The event overall was great and being in Manhattan made it even better. There was quite a bit of valuable informal meeting and discussion between many of the speakers and attendees. There was no 'sneering' or disdain about Yudkowsky or Bostrom as far as I could tell. It seemed like a generally open minded yet well educated crowd. If you regret missing it, then you might like to head to the [Envision Conference](http://envision-conference.com/) this December.
r/aiethics
post
r/AIethics
2016-10-17
Z0FBQUFBQm9Delo1YkhFNHY5Y3ZtSm51OTlWLWs0RTdBdkVlRTdCSUFCQlZub0hGakJFaEJHVnZ0UHdIaTNGTVlzeDM2Y0FSVEd2enlvZFNqMWJTdnRQeF9yd0NUYTlYR2c9PQ==
Z0FBQUFBQm9DemFzUmF5anhaNnVQT3BseFA1WmFwQ3QzMm1WSTdfRUZVSEJoaGhueXpxLW1pTUVNY2ZOb1FDalR0ckhLcTJKbzc5NUp6VndKSzd6M3VCT3JSdHhoYXZ2NHd5QTU1NUY1aXpieXVaZFJHVzFPMXZiaDBoX3lZU1lsb3dEYk1pVlhiUXVQX3Z1RVl6aE40ZFpMVjJSSmJEWHBzdU1GVTFUSEI1U0xHZ1lQaEllUlEzUWo0VHY4UHVuNU5tN3o4cHZQQ1N6
We're implementing user flairs for people with experience studying or working in the fields of computer science and philosophy. Flairs are marked blue for philosophy and red for computer science, and shaded according to one's background. The text of the flair will describe your specialty - "machine learning" or "political philosophy" for instance.

* Professional (dark flair): the user derives the bulk of their income through computer science or philosophy work, either as an instructor of some sort, a working professional (e.g. applied ethicist or data scientist), or through the publication of papers in reputable journals;
* Graduate (medium flair): the user is enrolled in, or has completed, a graduate program in philosophy or computer science or a closely related field;
* Undergrad (light flair): the user is enrolled in, or has completed, an undergraduate program in philosophy or computer science.

There are no special rights or privileges for users with flair. If you would like a flair, state your area of specialty in the comments, or message the mods.
r/aiethics
post
r/AIethics
2016-12-23
Z0FBQUFBQm9Delo1WFowNWMzNEhjNDRBNjRLYU5oZFBKMlFhVnVYNlU3Y0FqOGdrUGptTlctb3F0OVV5SEVkQ1l1TFkyNWJUeVBHWW0zQjlpTmU4REtuc3RkbXZkQ0pFemc9PQ==
Z0FBQUFBQm9DemFzLWVtYlRBOXh2NmxMYWMxZmJodmhrRER0OE00RW04RmkzRm9say1BN3JDMmFocjAtalppcEY3clhZSUUzR3pXc0hnN1JTVUk5cEZBdHhfRTdRbDV3VkpaZ2NER2JQZUtOamJqb3ZQSXZFdXliR25BWVl2emE4ZVRUU0Q5TkJ4ZUNGWHZxX0dCb1A3bmVBMDJZaUt6elktekpRVDRZUEFWRTJHS2FtaU1sT0hvPQ==
Disclaimer: I’m not an AI researcher, just someone interested in the field. I’ve had an idea about how to align intelligent AI programs’ goals with our human goals.

The problem is simple: Say you have some sort of software AI agent. The agent has a goal (or goals) mandated by its creator(s), and makes decisions on actions to take that would further the goal(s). Whether or not it can carry out those actions *immediately* or *directly* is irrelevant; for now, we’re focusing on the agent’s decisions, since its actions result from them. How do the creators capture the nuance and conditions of their goals, and communicate them to the agent effectively?

My idea: don’t give the agent discrete “goals” at all, at least initially. Instead, give it a network of weighted values to *guide* its decisions. An example implementation could be created as follows:

1. Download a copy of [Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Database_download#Where_do_I_get_it.3F), in its entirety.
2. Find an article on something you like, or value highly (in my case, “[fun](https://en.wikipedia.org/wiki/Fun)”).
3. Apply a numerical value to the article. The value should correspond, as best as you can make it, to the ethical value you actually apply to what is described in the article. For example, I could value “fun” at 9.5/10. This tells the agent that, when it makes decisions, it should try to promote/encourage/further/increase the thing with a high value. This works in reverse, too: a low value (e.g., 2/10) would tell the agent to prevent/discourage/stop/decrease/eliminate the thing at hand. NOTE: The scale doesn’t really matter here; it could go from -10 to 10, -1 to 1, 0 to 10, 0 to 1, whatever works best for your particular setup. I’m using 0 to 10 in my example because it’s familiar (e.g., movie ratings).
4. Repeat steps 2-3 for multiple articles about things you find important to rate for the AI. All of these values are your “specified values”, and they cannot/should not be changed by the agent, at ***least*** not without user permission.
5. Specify to the agent that you are, in fact, done inputting specified values.
6. The AI agent analyzes the language used in all the articles in the database, paying special attention to links to other articles and the context of those links. It then uses these to calculate “generated values”, which have the same function as specified values ***except that*** the agent can change them whenever the user changes one or more specified values. This is similar to how Google’s [PageRank](https://en.wikipedia.org/wiki/PageRank#Description) system works, at the base level.

Is this a thing out there? I found [this on MIRI’s website](https://intelligence.org/research-guide/#eight), and [this article has the same idea](http://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine), but I don’t know of any implementations of it. Other than that, I’ve found nothing quite like it. Thoughts?
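As a rough illustration of step 6, here is a minimal Python sketch under stated assumptions: a toy link graph stands in for Wikipedia, the update rule is a simple link-based averaging loosely in the spirit of PageRank, and all article names and numbers are hypothetical.

```python
# Toy sketch of the "specified values" / "generated values" idea above.
# The article graph, ratings, and update rule are illustrative assumptions,
# not an existing implementation.
links = {
    "fun": ["game", "happiness"],
    "game": ["fun"],
    "happiness": ["fun", "game"],
    "suffering": ["happiness"],
}
specified = {"fun": 9.5, "suffering": 2.0}  # user-rated; never changed by the agent

# Start unrated articles at a neutral 5.0, then iterate: each unrated
# article takes the average value of the articles it links to.
values = {article: specified.get(article, 5.0) for article in links}
for _ in range(50):  # iterate toward a fixed point
    for article, outlinks in links.items():
        if article not in specified:
            values[article] = sum(values[o] for o in outlinks) / len(outlinks)

print(values)  # "game" and "happiness" inherit high values from "fun"
```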
r/aiethics
post
r/AIethics
2017-02-11
Z0FBQUFBQm9Delo1WUNPOFZZYkhFYlBjOVBhd1RzRmdIX0wxSHVlN0l0QzVPUVU4NV9pcmtMbVVqczZVbHc2Y0RJSW9JLVFpTUlzN08wMm1NY2szM3ZtdDhfcmdSRW1kRmc9PQ==
Z0FBQUFBQm9DemFzaWNhUmFuejBaY2xucWpmZGtBWHhYNXR4MlQwSk1VT2ZncEtVS1htYVFmNDFubmJPcW1EU0ozVWEzdDQ4b3p2cU1JNWRVTGVWRm9qT0tuTEZ4cHp1YUF5ZElZWkZjZDhCRVdFMGFtSDhSLTRXT0VqRlZpV3kwZDlySzIzTUxZX0VCcEVQUnI4Ul9rSU5WdGpSNDZLalVJQmU5Qk9ZZnI3MGVJeDBvWXJJV2pSYVpqY3kxY1lpbVViYkRyVFhMM0ZwNVJZbEdoemdDeFFiR2xhdDZibVlSUT09
I'm having a hard time figuring out which aspects of AI would be helpful for answering questions 10+ years down the line about AI systems being sentient. What exactly should one study? Cutting edge machine learning? Reinforcement learning?
r/aiethics
post
r/AIethics
2017-02-19
Z0FBQUFBQm9Delo1elZkRjlCekU4UUhYeUZkY3hNekJxVTFqaFBNR1dOUnR4Ry05SE5YN3RLUWZvX1FtUWI0d2NUak03WWYwV21kdEs5ZFNIbWRiUU9DRGExTWRvR3lpNGc9PQ==
Z0FBQUFBQm9DemFzYVdYYm9BMWZoUHNvM1MwNmI3eXhTU05SRTRfVkt0TlVCbkhnMGtwU0F4Vml0MXNKOEY2VUdIa2ZkWVpKZ3dVX2JOeDk1OW9mYnNFNU0yak4yZlZCUms1MS1vOFB3ZlJzZ25pSzRPZXFMLXhGdnVrazF3UGFSa05wSTBVWmdnelBtaVEtMFRRc1dKaHJmRC0xRTFVQ0NzNXNpSEVOYmd3S1NheFl3a2dNemJqOVV6SEtlNEx0b2lmd3U2bGN3YW9CLUtrYUNiZzVKYVpLamNWYU1MelZwQT09
Knowing some things about Buddhism, and setting aside its mostly religious teachings: do you think it could be beneficial for sentient robots to know more about life, suffering, and human behavior in a logical and emotional way? Personally, if such a race of robots were to co-exist with us, the idea interests me, as I wonder if an artificial being could seek a more enlightened state of being. Could it lead to more of an understanding of consciousness and what is needed to truly be happy? I'd like to hear some opinions about it, and learn more about it.
r/aiethics
post
r/AIethics
2017-03-29
Z0FBQUFBQm9Delo1NWFPR0ZiQVZyRjVTYlR3Z1NxdmpaQ0lkQlg2STdHWm1mYkpja2JiaUZyekN4SDBQYm55OVE4R2RvRlZsaXRjNDlCdDY0MXVEUkpJX0kwZUVYMlBqY0E9PQ==
Z0FBQUFBQm9DemFzNzZTQmhfRHZ6MGNUV3A5N2pmR0lUSEpNQ0FPUDFqNUY1NTRXU1hZZmlMZHM5cm1iSW83TjBDV3RzVWhiOEJkdkhZYWFCdGVNMHphd0FZWWdZWmtsZGctY2wtV1d0RUtPYWgxTFpBQ2FuRjdGTWlFeS12YmEwR3QzMGx5Z0VJUDRzRnJMTHhEZnBDX3hMbFQxNkNRR3JmanpRd3NWZGU3RjJTblFyNERob2hXVnZianZDM1luOGRVLXZPamlaaXp3OGd3eDJteGFGM19mMFNnVUE3U01OUT09
I have spent a considerable amount of time considering AI and its effects on the economy. It seems a given that robotics will replace the vast majority of low-skilled work in the next two decades, and it seems to be a given that a lot of high-skilled jobs having to do with pattern recognition and statistics will be eliminated as well. It is my hypothesis that a lot of high-intellect jobs, or shall we say specialized career fields, will be made more generalized due to advances in AI's ability to handle large portions of complex tasks. What I am interested in is opinions on what this will mean for society. Does it mean larger groups of people will have an adequate ability to compete for an ever-shrinking number of jobs? Does it mean only the cognitive elite will be in a position to work? And in either scenario, what happens to those who are left without prospects?
r/aiethics
post
r/AIethics
2017-04-18
Z0FBQUFBQm9Delo1WjNIdGdxV21wWG8taFJXT1dtQm10X3FqdlE2QUd1YXBDUHhRTnJ1TjFRczU1UERUQmd4cVJhc3MtUjFXaTc3cFFkV0pDdkpOaHIwa1BWX0wwYURjTkZDNnpVdVdGSDVXMFFyREhqa1VnR2c9
Z0FBQUFBQm9DemFzakhJd1dPbmM0QWxNemVkWlMyei1BWV9McUZ2VmFmN29jT0ViREFPMjlvRndpdlZabjBFalNtSnVWdl9ZMmtCenpqelR3b212Z09rUHZwQWhMRDBPSUh2UDBhWEFpTUw2OU5yRzNibmdCdkMtOHJtQzRvZ2FYelA4Y3RLV2V2UnJBWGZGMFRpSXo4NnRSSVU3Tm5GOTA1dUhsTXhfcmQ3emZhdXBVNXhuOEpObHJleDFXUHVzeVRhZW9CM1pjWXlP
Hi, I run a discord server dedicated to discussing philosophy. The member base includes absurdists, empiricists, nihilists, objectivists, platonists, egoists, anarchists, and everything in between. The point of the chat is to discuss ideas in good faith. People who come around posting woo and then refuse to discuss it are not welcome. I hope I'm not breaking any rules of the subreddit by posting this, as this is relevant to philosophy, and the format of a chat is so different from Reddit's forum style that they aren't in direct competition. Take a look if it sounds interesting: https://discord.gg/ueCUWdz
r/aiethics
post
r/AIethics
2017-06-30
Z0FBQUFBQm9Delo1S3I2dm1Ic01pMGZkckJ6ZjcyQlBFWDhzYy01MVdhTUhaQUswX2RHN2RZWWFnVk1SYWc4em9EUGJzM3k4QzFHc3I2M0FJSmpTOGlVVjRBeklRc1ZPMGJPZmRvOXJWOW50WnNQWVhZb3k4RXM9
Z0FBQUFBQm9DemFzSXJIcGd3RkNsMHB2X0FuM3hpNW5yUTVPQ1lHYnZabmJkeG5tWGo5ZG51cmJzZC00QW9CTDBuZXZJZjhFSGRuVnYzWUhMWk9DODNqM041YkZhUWNuYzRsUTlaSDlHMVN0TE1obTJldVZHNHd5SHhwaDNxQ2VaeHNYa2JCSDBsT28tT2twTzlDcW9HbnZUOUhnSjc4SG02aHp4eTJhcGc1b2VKOENXSmEwclpNSG8zZDhyUEstQ3psZWtXZ3NwUWJf
Hello! I am a student working on a design/research project on the topic of AI. What I need help with is outlining where there is a need that I can fill with this project. In simple terms, the way I have framed my problem for now is that as computer scientists are constantly working towards further AI development we need to find a way to mitigate the potential risks and ethical problems that could arise from an AGI/ASI.

The types of solutions I have been coming up with so far (keep in mind this is a design project at heart, with heavy research involved) have been based around the idea of either an ethical watchdog group, an international consortium (similar to CERN or The Manhattan Project), or some sort of conference where people would come together to talk about these issues. My only problem is it seems like all of these things exist in some form already.

So my questions to you would be: Can you think of some way that I could narrow down the problem I am trying to address to something more specific, so that it is easier to tackle?

Can you think of anything that is currently needed that could help work towards solving the current issues with AI ethics? Is there a set of guidelines that needs to be made? Or some kind of metric or tracking website where we can see how AI development is progressing, what milestones we've passed, and what ethical issues we still need to solve? Or maybe there should be an educational ad campaign that shines a light on these developments/issues to the public? I'm just throwing random ideas out there but if any of you have insights into something else (or think any of my directions sound like they could work) please let me know!

Your general thoughts on what needs to be addressed on this topic would be greatly appreciated!

Thanks :)
r/aiethics
post
r/AIethics
2017-09-20
Z0FBQUFBQm9Delo1R0hkTmF6VE9uV0JDY0E1c3JESWowZXhPQlFxRERralo2TzMzb2s0OFMyLTZkQS01QmxRM0JCUFNmVHRvWlA2ejhPRXBDanZRWi1hZ2pSVUFGeG5fNkE9PQ==
Z0FBQUFBQm9DemFzUmtlc0lCOGxGOHdROEZZU2dWNVItUGE4LWltMEl1VFUyTk8tQWQ4N29PeFE2bW5haFAtbUx1TnFqdFlGcEphdl9IcGVpNjFpVk5QX3E5VFZadVF4QmZoMVp6emV0Z2pPMURocWRBcjU4YXk4eDFYVXpLSGw1UXVyRGNxQWQ2TEw1Qm5iVVFnSlZQdU1QZFU1dWpkSUp3SUFvQ0JncmU2M0VwaEV4RDdSZVlDNHZZZktjYlFLQWkzdzNCYWtnZTVa
(email sent to AAAI mailing list last week)

AAAI/ACM Conference on AI, Ethics, and Society
February 2-3, 2018
New Orleans, USA
http://www.aies-conference.com/

As AI is becoming more pervasive in our life, its impact on society is more significant, and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics. In order to address these issues in a scientific context, AAAI and ACM have joined forces to start a new conference, the AAAI/ACM Conference on AI, Ethics, and Society. The first edition of this conference will be co-located with AAAI-18 on February 2-3, 2018 in New Orleans, USA. The program of the conference will include peer-reviewed paper presentations, invited talks, panels, and working sessions.

The conference welcomes contributions on a broad set of topics, including the following:

* Building ethical AI
* Value alignment
* Moral machine decision making
* Trust and explanations in AI systems
* Fairness and Transparency in AI systems
* Ethical design and development of AI systems
* AI for social good
* Human-level AI
* Controlling AI
* Impact of AI on workforce
* Societal impact of AI
* AI and law

Submitted papers should adopt a scientific approach to address any questions related to the above topics. Moreover, they should clearly establish the research contribution, its relevance, and its relation to prior research. All submissions must be made in the appropriate format, and within the specified length limit; details and a LaTeX template can be found at the conference web site. We solicit papers (pdf file) of up to 6 pages + 1 page for references (AAAI format), submitted through the Easychair system. We expect papers submitted by researchers of several disciplines (AI, computer science, philosophy, economics, law, and others). The program committee includes members that are experts in all the relevant areas, to ensure appropriate review of papers.
r/aiethics
post
r/AIethics
2017-09-26
Z0FBQUFBQm9Delo1clI0RkE3V0x4MWtwRGxsRjdTM3hocENKUzFqZE45YjE1Zlp0a1BBMjNBN1ZDanhHTzFWYlhYT2lkem1DYk05QkFkbm5IenM3eFNyaUt6QmxTd2hCQ3c9PQ==
Z0FBQUFBQm9DemFzRlhVdkkxSGVPejVyVXliSk4tbkZJMGRZR3pld2YyWVljalA3M0lhSGtJN2g1bmlVeEhzNXh4UWFkYUhQdkhjTThwaVNWajhoQVlsZ2hmOC11dXJnNWZsWFg1RHNNN2lpbGJBODMxN3RHTmdSNkxDYmhGck9ERUtxaEVEaTNvYXlMeHotQTZRTUlGOFVGVmlMS1dBWjFWRzh3b2JFQi1Xd3llTDcyUUtQWUJTeEtRUF9FLUZ1a0ROdkhGcGh5N0VCNW1MUHNzTjJwWXhiVFd2SXVuNjRPZz09
I'm looking for some machine ethics moral dilemmas to have a rich discussion about the different philosophical approaches (utilitarian, deontological, virtue). I want to avoid the overly hyped autonomous vehicle/trolley car problem, lethal war machines, and sentient AI. Do you have some examples of possible dilemmas with multiple angles? Thanks!
r/aiethics
post
r/AIethics
2017-11-21
Z0FBQUFBQm9Delo1ZDl1bHlLa2NWcERKZy1PNTBYNE1SMmlNSWw5SENaU3M3RU56ay1uVkxWVXhmbmxtN3pEMmtmcFp3clhrbm1TeXUzMDNyZ1NNci1xNE5GdWtSemdlc1E9PQ==
Z0FBQUFBQm9DemFzVk5kOG1wajlMYkNUaG1NbTVUdEtkdUlLQkZpS0RKSzNkdnJuQ29HQk90M0lqUWVEMl81TGw3dU1QLVJfLVNzdGJ3Wk1nd0ktdnNBWnA1ckY4aXpDZDczRnM1RjdRZkhubThmZFJGTjE5SDhFMjdsV1NhM1k4WHhSUkhfTHQzVm0tNlRjekJXRGxCbVl1NGF3R1RJMV83SW1kRGpZbVZFeHJONEM3VzltUzRDenlKTGR1bTNGRDZxRUFzRVhWLUZrZHBRWmdzdEd1bmxFdVJkNFo5cTFNdz09
https://soundcloud.com/21-bioethically-sound/002-new-age-neuroethics [seeking feedback and commentary on podcast] If neurotechnologies or mentally-enhancing substances become the Viagra of daily functioning and create new benchmarks for productivity, wakefulness, even emotional love, what's going to happen to the fabric of society, the character of our interactions with one another? Will these altered states be genuine reflections of a new-and-improved “me” or “we”, or some transient artificially-induced condition that wholly confounds what we inherently value?
r/aiethics
post
r/AIethics
2017-12-04
Z0FBQUFBQm9Delo1S2N3bkhQLVFqZVRDNGlSZEZqb2lXel84ajR2U1VFYkpad0dma2M4TEtpeVpSekJaTkNPdlBEYlJCMENxTU5qX2dvZk9RLWZXbmVxUzY4dEFTUUpyaU5peU1STUFWQlRRSkk4b1I2Y3pBWFU9
Z0FBQUFBQm9DemFzMXMzSUwteV9GeXdHRnpvWUtGdTVZaTVIMGtjekdZdklHYU9WZmtBTHVEeUtPYk1kZkhpYUQxdUVJQnVERVBrSjZ5ZnpjRFFqRnNZOU1fYnc1N3BWUG1aRXFPdkM0Wms2QUY4bkZ0YzBFU2pDWUhwaXB0SFpLcVdYQXBPTzBKZ0k2QS1kLWRQQlhuOGt3UGlnbWJxaUpiSVBiWXpDODZySzdhV0hZeTdYeERoeVpwVkU4V1RFYkF1cDRvWW0yUE15
Hey everyone! I'm a graphic design student and I'd love to get your thoughts on [my capstone project](https://www.behance.net/gallery/59498375/The-Future-Institute).

We had open range for topics on this project, so I chose to address the issue of ethics and safety in the development of AI. My solution was to create a research institute that would focus specifically on this, but in a more collaborative way. Rather than utilizing the expected "techy" graphic styles, I opted for classical type and illustration to help connect to more philosophical thinking and ethics. This is contrasted with the bright and modern colour palette.

Please let me know what you think, I'd love your input! Thanks :)
r/aiethics
post
r/AIethics
2018-01-06
Z0FBQUFBQm9Delo1RDlqU0xTUHQyTmZkQ3Zvek4zMDJ0WjRWU21FdndXYkxsak1CYlZxZFIxUU9hb1NaYUYwOG0tX0E5N1FGeTdFZE1nbzNXeUQ1YnZUM0lYVWxmN1M4ZHc9PQ==
Z0FBQUFBQm9DemFzV0IyTUc1TnM3TXZ6RzlzYjV6cVg0WTduMGx1bWhJQzFld1hOOXk0ekN5aW5WYVBRSUdUZmxLS0NEMm5ERHduU3U2MjcyNFJUZW1pT0ZQMDByTFBqXzBBUDhYbFczUnhfWVZBRzZ2R2cwWjVKNW5RNERlQ1lWMWVrdTVlTjRsVTVUMjhuX2ZSSl93bHNsLUpKX09CQXNxd1lDQmtPQTVhQlBfYU1SaElYckxzSS1qenQxcDMyNGlWT2laUUtwYk5N
Alright, listen up faggots. It’s come to my attention recently that some of you don’t know jack shit about options. If I wasn’t already terminally autistic, some of the comments I’ve read in the sub might have made me go full retard. With that said, my friend Jack Daniels and I have taken it upon ourselves to get you motherfuckers #LEARNT on some god damn options. While I have little faith that most of you will truly understand the intimate innerworkings and dynamics of derivatives, I have no doubt that a large majority of you will take one or two small pieces of information away from this. The goal here is to get you to the point where you can start overestimating your abilities again, like a good boy should, instead of blind dick swinging, like most of you are currently doing. *Disclaimer: I’m going to skip all the boring, possibly foundationally necessary academics behind where the Greeks come from (inb4 Greece), Black, Scholes, and Merton’s research, Ito’s Lemma, and all that jazz. If you want to look it up on your own time, read a fucking book. Hull’s book on derivatives is basically like the bible for this shit.* *Credibility: I’m a financial analyst in the risk department of a large insurance company, and work with our hedging portfolio on a daily basis. I also have a Bloomberg terminal that I like to aggressively use so that everyone thinks I know what I’m doing.* --- #Background There are only 4 Greeks that you really need to know to trade equity options: 1. Delta 2. Gamma 3. Theta 4. Vega If you have at least a modest understanding of these, you’ll be on your way to sweet, sweet tendies in no time. Now onto the gREEEE^EEEEEE^EEks --- ##Delta Delta is the grand-daddy of them all. The Hugh Heffner of the Greeks. Most of you probably are familiar with delta, because it’s the easiest one. Easier than your sister, which is really saying something. **Delta represents the relative increase in the price of an option, given an increase in the price of the underlying.** When you buy or sell an option, the price change doesn’t exactly mirror the stock 1:1. Options expire at some point in the future. Stocks don’t expire. The implication here is that an option is only valuable if you can exercise it for a profit. Logically, this means that deep ITM options will have a delta pretty close to +/-1 (depending on whether it’s a call or a put), while deep OTM options will have a delta pretty close to 0 (or 100/0, whatever convention you use, the only difference is where the decimal is). **Note:** Option deltas range from -1 to 1 (or -100 to 100 deltas). Calls have positive delta (0 to 1) while Puts have negative delta (-1 to 0). If you’re seeing deltas on your trading platform that are not in this range, you’re probably seeing Dollar Delta, which is just: Delta x Notional Shares (usually 100 per lot) x Price of Underlying **Autist’s interpretation:** The easiest way to wrap your autistic brains around this is to think of delta as *roughly* the probability of the underlying stock price going beyond your strike at expiration. For example, an ATM call has around 50 deltas. That means you can intuitively view it as having a 50/50 chance of expiring in the money. An increase in the stock price would give you even greater chances, hence the delta of a slightly ITM call is a little over 50, and deep ITM calls are close to 100 deltas. An ATM Put has roughly -50 deltas. 
This doesn’t mean a -50% chance of expiring ITM you fucking idiot, it just means that your option value is negatively correlated to price increases. --- ##Gamma Gamma is the least-hyped Greek out of all of them, but definitely one that could cause your portfolio to turn into a shitshow while you’re not paying attention. **Gamma represents the change in Delta, given a change in the underlying price.** Gamma is the 2nd order derivative of the option price with respect to the underlying price. It tells you how fast your delta will change when price moves happen. Just like speed and acceleration: the second one tells you the rate of change of the first. It can also be interpreted as a measure of convexity, telling you how flat or round something is. Like your flat-chested girlfriend has almost no titty gamma, while Kate Upton titties got gamma for days. Gamma is always positive, and is always largest ATM. **Autist’s interpretation:** Think of gamma as the big swing when options go from being OTM to ITM or vice versa. So the next time you see that piece of shit stock hitting all time highs, think to yourself “Holy shit, this dumpster fire might actually moon, better YOLO on some calls real quick”, then it drops by $0.05 and your calls drop 50%, blame it on the gamma. --- ##Theta Theta is the turtle of the Greeks. Doesn’t move too fast, doesn’t do too much when you poke it with a stick, boring as fuck. But this is where the time value of options comes from, so it’s important that you know what it is. **Theta is the change in option price, given a 1 day change in time**. Short option positions have positive theta. Long option positions have negative theta. This means that the market value of the option decays each day it comes closer to the expiration date. Less time to expiry = less time to moon, which means people will pay less for it. This is essentially how options selling strategies make their profits. They bet that the price won’t move that much, and most of the time, they’re actually right, because dumb cucks like you are willing to pay those prices. Like gamma, theta is also the largest when an option is ATM. As time passes, theta becomes larger and larger. The implication here is that in the last week of an option’s life, theta will be exponentially larger. **Autist’s interpretation:** Think of theta as the shot clock. It keeps ticking away, no matter if the game is exciting or boring. If it’s a really close game (i.e. the option is ATM), then the shot clock is pretty much the make or break thing for you. If the game is a blowout (option is OTM) then it doesn’t really matter that much. When it comes down to the final minute, and it’s make-it-or-break-it for your shitty, shitty, poorly thought out March Madness bracket selections, you’re literally ripping your hair out because you’re on the emotions express, screaming “WHAT THE FUCK WAS THAT, REF? ARE YOU FUCKING BLIND?” and then cry and piss yourself in the corner. That’s the only time theta really matters. --- ##Vega Possibly one of the most misunderstood Greeks, and 105% of the reason behind why RH faggots try to get their trades reversed. **Vega is the change in price of an option for a 1pt increase in the implied volatility of the underlying**. Now, some of you faggots may know what implied volatility (IV) is, others think you do. No one actually does, because it’s a fucking made up concept in order to get the math to work. 
The short bus explanation is that implied volatility tells you how much people buying and selling options think that the underlying price has the potential to move in either direction before expiration. I’m not going to go into how it’s backed out of the Black-Scholes pricing model, or how implied volatility actually represents an estimated annualized 1 standard deviation (68.27%) interval assuming a Gaussian distribution of continuous-time price movements (specifically addressed to all of you elitist NERDS out there, cash me in the comments, howbow dah?). Implied volatility is the only unobservable and incalculable input to an option’s price. It’s literally made up. Historically, it hangs out somewhere between 5-10% above historical realized volatility, but when or why it jumps or drops is purely based on the dumb cucks who are trading the options. The important distinction here is that **Implied Volatility tells you whether an option is relatively expensive or relatively cheap. Vega does not.** Vega just tells you how sensitive an option’s price is to changes in the will of the people. Both calls and puts have positive vega. Intuitively, this means that when people think the market will move sharply in either direction, options increase in value, because people want protection (or phat gainz). **Autist’s interpretation:** Vega tells you how much you’re fucked when people lose interest in a hot meme stock after it doesn’t moon, or when people unwad their fucking panties after some good ol’ Thursday action. --- #In Conclusion Hopefully you retards made it this far without wandering off to try and hump a doorknob. If so, congratulations, I hereby award you 10 good boy points. If there’s enough interest, and I can find more whiskey, I might do a part 2 on basic options strategies and how to completely misapply them. 𝒩𝑜𝓌 𝑔𝑜 𝑔𝑒𝓉 𝓉𝒽𝑜𝓈𝑒 𝓉𝑒𝓃𝒹𝒾𝑒𝓈, 𝓎𝑜𝓊 𝑔𝓇𝑒𝑒𝒹𝓎 𝓁𝒾𝓉𝓉𝓁𝑒 𝑔𝒶𝓎 𝒷𝑜𝓎𝓈. **Edit:** Thanks for gold, assholes. Feels like being captain of the short bus for a day.
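If you want to check these numbers yourself instead of trusting a drunk insurance analyst, below is a minimal Black-Scholes sketch in Python. To be clear about what's assumed: it prices European-style options with no dividends, the `bs_greeks` helper name is made up for illustration, and real platforms differ in conventions (theta quoted per day, vega per IV point, and so on). A sketch, not trading advice.

```python
from math import exp, log, pi, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def n(x):
    # standard normal PDF
    return exp(-x * x / 2) / sqrt(2 * pi)

def bs_greeks(S, K, T, r, sigma, kind="call"):
    """Black-Scholes Greeks for a European option, no dividends (illustrative helper)."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    delta = N(d1) if kind == "call" else N(d1) - 1
    gamma = n(d1) / (S * sigma * sqrt(T))  # same for calls and puts, peaks ATM
    if kind == "call":
        theta = -S * n(d1) * sigma / (2 * sqrt(T)) - r * K * exp(-r * T) * N(d2)
    else:
        theta = -S * n(d1) * sigma / (2 * sqrt(T)) + r * K * exp(-r * T) * N(-d2)
    vega = S * n(d1) * sqrt(T) / 100  # per 1-point change in IV
    return delta, gamma, theta / 365, vega  # theta converted to per-day decay

# ATM call, 30 days out, 20% IV, 2% rates
print(bs_greeks(S=100, K=100, T=30 / 365, r=0.02, sigma=0.20, kind="call"))
```

Run that and delta comes out around 0.52, close enough to the 50/50 intuition above.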
r/wallstreetbets
post
r/wallstreetbets
2018-03-29
Z0FBQUFBQm9Delo1ejMxS1ViNkxUbmRlai1UN2Y0XzE4ZmUtYkdMTElwdThxT1U3M09FNWwtYzdSR1c5dUotUzltMlpIdGx5ZkpqN0QtSGtUV1NmZTd4MTZCR1RXUVBNOTZuSm45TGdidE52WXBzeGl0d0dWMEE9
Z0FBQUFBQm9DemFzdDFWbkw4ektOSlM3Q1VHSk13OXVLcy1CSGk2S3MyQ2F1ODBoZlEwNWtjc0J3bWdIRTNVdWtQNzQtTURaRnl6Q0tJR3ZidUx6Sm9MeW1hdFZiUUFwMTVYSDFzYWd4TTJBWl9LdE5jMFBKdVZBWnBaRUFQQUg1R1Z0eDBqLV9mRkR5Z19UazQwX2szSnJfMFNSTEJZSXV5UUdfcDUzOUFySENRQ0VWRXdlMzhFPQ==
When a private organization develops a machine (whether a driver-less car or a genuine AI) that requires ethical stipulations to work in society, and it does not ask for society’s input, it establishes itself as a dangerous authority. Society at large already determines right and wrong; this should extend to the machines that will only come to have a greater and greater impact on our lives. We need to open-source machine ethics. The trick is overcoming the original problem: those with technical expertise making ethical decisions for others without that know-how. The collaborative interface needs to be relatively easy to use, or many won’t bother learning it. It needs to be decentralized, human-readable, and censorship-resistant. A place to start might be a wiki made up of the ethics, axioms and “common sense” of society, written in a fourth-generation programming language very close to human semantics. Today most people generally consider Wikipedia to be a solid approximation of the truth; if we could have that level of collaboration for a machine-readable code of majority-agreed ethical tenets, I think we might avoid the power differential that automation (and beyond) represents, preventing serious ethical risk for our species.
r/aiethics
post
r/AIethics
2018-04-06
Z0FBQUFBQm9Delo1Yk1oTTdRcklfWEdPRWIxemtnSWJuMHB4dmpXRTFoMFVmVXFsTTZ3NzIxSXpidjZzQUZUX09KQ0lrcFNRZ0o2X1ctQVEzLVQ2YzFJc1J4MlA0M3RtVnc9PQ==
Z0FBQUFBQm9DemFzbXVSd3NUd195TlZ0NGx1aXhjMHkweE5oenprNlplMkJJN0lRQkYxWnNDZmlxWUF5OXoxVGQwMFBuamRhM244Z05IUTdNTEtZQktaY1ctM3lxaGlBek90YTdINU82cXhZMHNmeE03eWFFSnpOTy1uMlI3X2lpLU1kWVB3eEY0LUE3S0NGRldEdnpzSnU1VWRYWGhJVnBDV2VZUGh0bGpMTzlReFFtN1BWeHBhZ1Mta3gyYVlRVTUtVmxCcTViUzgtQmlUMlRrU1NvRC1uYXZRbnEtOFZ3UT09
Yeah, so it's not happening only in China. https://www.theverge.com/platform/amp/2018/5/22/17379968/amazon-rekognition-facial-recognition-surveillance-aclu Should it be framed as individual freedom vs. societal safety? What did we learn from "human surveillance" that we should be applying to "machine surveillance"?
r/aiethics
post
r/AIethics
2018-05-22
Z0FBQUFBQm9Delo1RTVZeXU2RXdqcG1uZHVhOEVkSF82OFZCcnlUUnltTWd2Nm9rTWFhVnZlMkZXeXVsUXR2UGJFaUIyYzlPb1Rrc3AwakFBVjdsODlnT2RkUXN3VVRSbGc9PQ==
Z0FBQUFBQm9DemFzY3NWNU5OTWdQQWpuUUctdFMyT3JCYXlxclNrQUVTOXNRdF9OYVFnUzZ6WGh4S3ZYaVRuN2dqSWRTR3FGNWl1bGxCRzBqVkRYWHl0XzVuNzh6MUwyZTNnM25yVmY4REFyU1p3ajAxMjdvVW10QWVEYjdTamtMSldXbnU1elNjaE5GVlE5VDkzdW80cVFGS0J0RTFfamJKTFZZYWlxODZDR2xzTi1ZRk56dU1ZPQ==
Hello everyone, I am a law student and in a couple of months I will start writing my optional mid-studies thesis. I would like to tackle problems in the field of AI from a legal point of view; however, I am suffering from a general lack of thesis ideas. I am mostly looking for problems related to intellectual property; failing that, problems related to the legal responsibility of AI (but no self-driving cars, that has been beaten to death already); and lastly the application of AI in judicial proceedings. However, I am open to any suggestions you might have (facial recognition, **ethical problems...**). Anything that you feel this field faces or might face in the near future from a legal point of view. Have you already run into some related problems? Feel free to make it as technical as possible; for what it's worth (honestly not much), since I am interested in this "intersection" I am familiar with some of the underlying concepts, can code in Python and took several AP courses in maths and stats, so bring it on bois. Thank you in advance for any suggestions or tips you might have for me, it would really help my studies. Edit: I don't want to end up on /r/choosingbeggars but preferably it should be something "new" (hence no self-driving cars) and preferably some really specific problems that I could form my thesis around. But of course any insight will be appreciated. And yes, you will get credit in the thesis.
r/aiethics
post
r/AIethics
2018-06-22
Z0FBQUFBQm9Delo1WTNxN2JrZWhvLXd5UTdLRjhEUmdsNDNkU2VwaE9sRF9PeVRDWEhNVmxyMGVvZGpWdGdzZ2thbGY0Mk9saW85ZTFUdm9Qbl9CRElvVUsyT2NPeHJObkVZYU9QNGVheG14V1UwSks2eEJzZDg9
Z0FBQUFBQm9DemFzdXQwbnZZWjhFcXdROWFyWmc5TFpXVmkzN3k1Qm5scXRtblhDRFFETXBnLWN3ZS1SekludjRBa2wxMlZoT3ZpdUNEUkozNzlOYzBhUlR0VC1xWmZ5M1AzVnhyNkhEVEI1UHJVRmhBaTRzUVZ0d3NzNm4zTEZibEVMMUFVQ0Y3cG84M3VoMUdmbmNZcHNnaDN0WXNLa09zcXRUNko4d1VSRlR0SVlCNFVLVWpjS2JFVmNvTTJFNS1sYkExeHNnVHJXUW0xSUpZYWotZFp4cHhqWmZZanRrZz09
The [EthicsNet Guardians' Challenge](https://singularityu.us4.list-manage.com/track/click?u=cf8d60100fb6d439c559221f0&id=33110ba26b&e=4f95f866cf) is live!  We're asking the public for help on how we can best teach machines about kindness, by creating a dataset of pro-social behaviours. This could be an important step on the road to AI safety, making various proposed algorithms trainable and deployable. There is a pot of $10,000 in prizes available. We would be very grateful indeed for your ideas, and would appreciate it if you could please help to spread the word. It has been a long journey just getting to this point. Thank you so much!
r/aiethics
post
r/AIethics
2018-07-05
Z0FBQUFBQm9Delo1Nkc3eUxZSTNKLTZNdmFueGhFZ1RnTU1aVmFYZGpuQ3ZTZWd1MUpDbUt2RU1USzk0OEc2N2JqZDQyMXAyNWNIWE45cWRVN1BObUlUSHNYbXVkYldWY1E9PQ==
Z0FBQUFBQm9DemFzMk15eWd3VWwtRzUwYnVyYTVxSTVBWkZsLWgxWWh0WFBGMmljcXNjQlNjR1JYUkU5VFZiOWJiNEg0QjFXTFVEYkpmbXNacEFITU9NWWJzak9YY2hwMVpHNmZWRTJ6ejBVWkVENEJ1dGlDbUVRSHZUdVg2TFItNHRyXzJaZ2ZDR2tWOW1RM1VFN3pzMWttVmtqZmdOUXR0ekpGOXJGczZhYzlHN2MxQVBmMm5BM0JlV05oRWwyRFBudUtMM01zOFBfWUNEMWhFaTNxcUxmMDJfY0dxQTlTZz09
Hi there, we are a group of postgraduate researchers at the Royal College of Art. Our research topic considers the ways in which Artificial Intelligence is affecting the Legal profession. Some of our speculative research questions are as follows: * *With the use of the Internet and technological proficiency growing exponentially, the platform for ‘Cybercrime’ increases also. How do we presently define Cybercrime and where do we see its direction heading in the future? With increasing calls for net neutrality, how should we regulate these offences while maintaining a free and open internet?* * *New original forms of crime are being facilitated by the Internet; recent cases of ‘SWATTING’ and ‘DDoS’ attacks have showcased this. Is the current legal system able to adapt and enforce accurate justice against these new crimes, or should an alternative judiciary be considered?* * *Can you envision a future where an AI system has entirely automated the Legal Profession? Can a machine learning program take on the role of the ‘Judge, Jury and Executioner’? What happens to the idea of empathy and compassion in this future?* * *It is often claimed that our Judiciary doesn’t reflect the diverse society in which it serves. Is there a potential growing disparity between the way in which communication and information is shared over the Internet and the typical demographic which sentences it? Is internet culture completely/accurately understood at the highest level?* If anybody can offer an insight into any of the above questions, we would greatly appreciate it! We would also be super stoked to have a conversation with somebody with a background in this world too. B, D & K
r/aiethics
post
r/AIethics
2018-10-29
Z0FBQUFBQm9Delo1WF80Ni1aM200cnJRaXNvdnNVNk5zMFlrNDk4M1lhX3J0bFhhSTJzaVNZblUxU2htTVJJWVVuckRwYUNtSXBJUVIwUG1Zc1oxaXcwanNTLXB2SHFhSkpmV1lMREcxMUFwd2ZnVzVtOWxKRFE9
Z0FBQUFBQm9DemFzcjJvYVlOeUZYVlREaUhUeGstM1hJMzRJWkFQVldDNHhOTlFXQjY3UWRCbkIxNEZ6N2UtY2tPZzlnSXVNZFdFMW56bXNUMkhTdklhdHZiM0hobHdWb1ZoZ1FwWlFBQXdNWm1fVmJQV25oc3lrUFNFWWZoX1pLNU5jdkFjM1FUMzlWdTZSaHFPZFRlWC1TZ1hRR3pJUDZEcE9UaGs2WVpFSm1QbGxGSVJzV2w5cmNYcy1BaTFkbVFlU1I5ZzZRbmFF
I bought this book a month ago out of curiosity and have just started reading through the second part, the Law of Narcissism. This is the first Greene book I have read, and so far my impression of it is just "okay". Through the first chapters, I do find it interesting, and I agree with most of what is written describing emotions and the source of our self-worth. However, so far I am disappointed by the lack of footnotes, citations or any sources for his claims, which often sound like they come from scientific studies; this honestly makes the book feel a lot less reliable, even if what it says is actually true. For example, in the chapter on mastering your emotions, he talks about how childhood trauma, or generally negative experiences in childhood, echo through adulthood, and how when confronted, adults who experienced a bad childhood re-enact or relate those traumas to deal with their current problems, which clouds their rational judgement when taking action. It would be a lot better if he provided actual sources for these claims, even if the argument makes sense. The book doesn't do a good job of telling which are opinions he constructed and which are facts from actual credible sources, which is ironic for someone who suggests being as skeptical as you can while wholly excluding the basics of scientific writing. The non-fiction books I have read lately, by comparison, provide footnotes or a tag you can refer to in the notes section at the end of the book, before the index. On a more positive note, I do enjoy the historical examples he picks to describe each law, etc., but so far I take it with a larger grain of salt than usual.
r/books
post
r/books
2018-11-13
Z0FBQUFBQm9Delo1YklVWHJoV2Q5bzh2VE95NHl3QS1ta0xfM0x1aXBCcGJEczYzNy1raVhqcDkzNTZRcVdRbjQ3eE1fUlF5N0EwVVAtY2lwVFNuRFpKcUZmWWFmN0Z2Rnc9PQ==
Z0FBQUFBQm9DemFzNnZZR0lEb1VSdnlaVFZjbHp2dDFYbUpWOXNtZkhLbUZMRzl6eDg0T0ZDY21GRWFtSGM3ckhaYmJ1aHgtb29Zd1ZLR3dJQXY1QWhYM1NaRzNzekd4MDR3RVgyM0FsOXFmR3RhRWt5UkpkQi1XNEgxTEs4VjhHQkNZOVlwdjQ5OUV6VW85Y2Q0UldpQ3NHOUU2R0w2Vk9VSzkwYXV5MnliZ1VuR0FWUG5pQ0NJV2tVS1JvZW5YQjBUREJMRGwyckwz
If we were to create a true AGI, would it be able to decide what it wants to do? Could it evolve past whatever limits we place on it? If the AGI had a processor similar to our neocortex, would it be susceptible to all the problems that humans have? These are large questions. If you have resources to point me to, I would be happy to look through them.
r/aiethics
post
r/AIethics
2018-12-22
Z0FBQUFBQm9Delo1aElTU09DNlFkU2kyU3lveVh4bkhqQmNvOWlIY2swcjU3Nk5ERGdvSGNqRDlSMEFPbVc0cFNvZGxhbUQxcXI3ZERIc003bGtyUWZLQk10U2kyV3B4NFE9PQ==
Z0FBQUFBQm9DemFzbUZzMjZXV0Uyd0JmSjhsb2M5TTNpQUlMZUstSVFtWGpJa1VDcTJYcUhleWZLcW5VaVJPTXJudW42MzRHaUNYM0xhMjQ0cnkyeVZiOFZNaTE4dnhPVHQtV2VBYzh0ckswa2RxbEhTbHdMbmlqTlN5RzRzSzhPbjlkT1VLeWlsYjVsV25IdWFfWDNzUWlHMUd6S0pkSC1LM1lHMnByLUlfRExIYVpTdk1odHRFPQ==
I think most of our problems with this come from trying to limit the conceivable lessons and derivative understanding of AI systems to a simple series of rules for behavior. And although it's easy, the rules we set for AIs are crutches for that learning that we should be understandably wary of. Having a rule like, "Don't hurt people," would rob the AI of learning from the consequences of the action and the reasons why it isn't good. Obviously, while it's undergoing this kind of early-stage learning, it will be important to make sure it isn't yet capable of doing serious damage. But it's important that it's able to synthesize the consequences of the world around it into its own understanding - otherwise its behavior could have large gaps in terms of what we would consider normal. Do you feel that this form of learning is too dangerous, or only that it's too hard to do?
r/aiethics
post
r/AIethics
2018-12-29
Z0FBQUFBQm9Delo1QWVBSnhzM0NSSTBzMmc2aWdSRkM2NkJpekVsNnBvNW82RlBJZkRzX29kVlhXNExVZVdlOGJrTjR1a0YyODRzc1BqQ2lTb3V4ZWN3NGxiclF0Vk43Unc9PQ==
Z0FBQUFBQm9DemFzT0kzbG9qOVlxNXpFM3NUek03TXd0TzdtNUdwOTlvMkpQZ0dGRjBxR3ZjQVppRXZaZFBxaTdLcUhwSHctajBUdTNnRXpoYXo5Q01BaVdNQXVMLXp6MUs2UTZLN1A1UWZXczBuTzFoaGJKQ1pEWTh6aTM1RHBkeW93cjAwMG8wcE1VZG4teGNoNGdYc2NqYjBRT1JiSXpiLXpBS2NJc1kxMnlYNHhKRWxuMlZFTE4tUnh3aVFfZm9tak1WcHlFQXdH
I'm completely flabbergasted about the hype surrounding "ethical AI" and encourage anyone to convince me otherwise. Either the entire discussion surrounding AI ethics is by people who are incredibly innocent and lacking street sense, or there's something I've completely missed. I thought I'd make this post to spell something out: AI will be a tool. Nothing more than that. It's a simple algorithm of gradient descent, reward mapping, or whatever other interesting technique comes to fruition in the next 100 years. Here is the revelation for everyone: The ethics part of AI has nothing to do with AI, it has to do with the humans behind it. This is the same argument that you can't blame guns, only the shooters. Guns don't kill people. Humans do. Before this degenerates into a partisan argument I'd like to state a few observations: 1) We don't attempt to program ethics into nuclear weapons. Rather we hope the humans that control them are ethical, and our socio-political policy is conducted in a manner that controls the humans that have access to nuclear weapons, not how the nuclear weapons operate themselves. Attempting to program ethics into AI as opposed to the people that design the AI is equally ridiculous. 2) No matter how many "make believe" rules or transhumanist mind-masturbation principles you program into a superintelligence, all it will take is one rogue organization, country or terrorist organization to implement basic simple AI algorithms that weren't programmed with those rules in a server farm of GPUs, TPUs, or whatever the fashionable hardware of the future may be. 3) This post has nothing to do with the ethics of how humans can program an AI. Of course this is a valid point of public discussion and policy: Ethical humans absolutely should ensure that any AI they program for any purpose that may affect other humans behaves in an ethical manner. Rather, the point of this post is the laughable optimism that some people seem to have surrounding an "ethical singularity". It's absolute common sense that any form of ethical singularity would be more complex than a non-ethical singularity. The simpler things always win. And if the simpler thing doesn't win initially, eventually it will through rogue people/entities. I shouldn't need to elaborate on that truth any further. I had to make this post after seeing the trend of "how to ensure superintelligence aligns with human morals" absolutely everywhere, somehow merging itself with serious discussion of how humans can program AIs they have control over for ethical purposes (e.g. making sure a self-driving car behaves ethically). If it isn't obvious to anyone reading this: A true AGI that has the capability of being smarter than us and having free thought wouldn't give a damn about our ethics, and any attempt by us to artificially program it to do so could easily be bypassed by any terrorist, rogue military or perhaps even non-rogue military organization at some point in the future. You cannot stop that any more than you can stop a terrorist attack occurring sometime in the future. It is inevitable. I'm genuinely at a loss regarding how so many people are even bringing this type of discussion up at all. Programming 'ethics' into any form of superintelligence is a completely ridiculous concept for the reasons I've stated.
r/aiethics
post
r/AIethics
2019-05-03
Z0FBQUFBQm9Delo1NmkyblpSS2xReVhPTGJ3bFVFRkNaekY5bHMteXZPR2RmdUMwTGJMRE5YdjFlOU83NGFYZmtGUnlJT09tNm9SUUV1NXBVRW82bng3ekxyeEZpUGhINFE9PQ==
Z0FBQUFBQm9DemFzV3RCWDJLREw2UU5kdVdMTXJmeFJwV1NDXzFOM25BN3NibjhYVHduZTFaNUlJeVhMQ0EtMElPUERuQURvZmZfUGVaTDgzY0lpWWJnNDc2d3JhTHU3aFNYX1VkWWhkOW1xckQ4ZThQNFpuazBqQW1naE1BM3MwNmZQS0xfaGF6VzZnaG15RmQ2NzRERFZuRXgxR1Q1THVuNGdFclJjTVJEM0M0cmtTSWQwOUNZZXA2RVBjNk1aMC0wQldmWUtOTmxO
Looks like an interesting internship opportunity for folks to get their feet wet in the field of AI ethics: [https://montrealethics.ai/srip](https://montrealethics.ai/srip)
r/aiethics
post
r/AIethics
2019-06-17
Z0FBQUFBQm9Delo1RHpHWUpxZ2U2VndsVHhranZQWFdIeUV0R2NGb2F0TVpiYTFSTjZFcFBORmdCRl8wN2tmNXNlUmJFOE11UWpJWTlDVkRNb0lMd05SUWdTcDVOMldoR2c9PQ==
Z0FBQUFBQm9DemFzYTBZUzdPQkdYcmlzb2NOaHMtZU14dVFYR2Zfa2E1RERlVFdsbU9KOTM3UGFiZUhpNnFYN1lZc0lDV0Vmd2ZBNmR3cEl5LXlOZGNmU3ZxLTFtM2VkM2xyOGtrVVZjTzFzczZjS29tNXRLSTdaLU91ZHBINmxXMkFqTFVFWUc1OXpJWFZhZ3NmWXZKRUJQZDFDeFdwS3pZdlBnRWxXUElRLUVNbzNzWXY0REh3WHVxUjc5U1VFeXZQU1l6cWQ2X3dB
I've recently published a paper ([https://doi.org/10.31235/osf.io/vapje](https://doi.org/10.31235/osf.io/vapje)). The information you would need can be found in the abstract...and in the paper itself. However, here is a brief description of it: "*The SBST compresses human motivation down into a simple mathematical system that implies strategies for manipulation and comprehension of another person's motivations by modifying the elements in the proposed system. As such, the SBST will have profound implications for managers, marketers, psychologists, and possibly AI developers.*" By turning human motivation into a mathematical system, it allows for serious (and specifically-targeted) kinds of manipulation of the populace, as expressed in the strategies section of the paper. However, it also means that a human-like AI can be created with the SBST as its foundation (since it turns human motivations into a mathematical system). I look at this in greater detail in the human-like AI section, but that brief description above should give you the gist of what it means. I am, by no means, an expert on AI, but I fear that there could be drastic effects on the field of AI development. One being the development of AI to mirror human motivation with this mathematical system. The second being the development of AI to enact these manipulative strategies against consumers. There are already uses of AI in business for things like content curation and ad targeting, but this gives AI developers a means to directly target a person's motivations with tested strategies. Once an algorithm like this is perfected, it can model a person's decision-making process, not in a "black box" manner like deep learning algorithms, but in a way that is accessible to the AI developers and anyone else who wants to see it. So, I come to you asking this: "What should I know about this topic to better handle its implications for AI development and AI ethics, and how can I minimize the damages of its implications while still promoting the paper?"
r/aiethics
post
r/AIethics
2019-07-10
Z0FBQUFBQm9Delo1UjQzM29FR21zajJBWVE1Y1VkaUVFMnFzQnVtNVBLeHB4RWg4WFNnV2pmenVRa2hDaDFSNTNUTV80R0dwTDhIWHpRTEFQdjc5Um82SGU5dHg3TnlNRWFYOHJWa01qODFkdzNiX1YyLXIwRzg9
Z0FBQUFBQm9DemFzVDJ1TVR2RFotUUltdzNQaEV2Vks4cU9RdlJDSWEtREgxSlpGcVpLU0tRQVZDdEp3dEMtT2h1U2hMaWlxNW8zRXFISnZRODlJb3R0Y0pWSUtITF83M1hGajVwQjF5aks5X1EwSUlwQWtmT284LXVnckpaWTJ1WUN2S3h5R0Z6cUNSZERSNHhtb2tuaXN4Qy10TzVCSGcydXhXY18wSnBvOVI5TEF6eFlfQy1NPQ==
**What is** r/BitcoinCash **?** The [r/BitcoinCash](https://www.reddit.com/r/bitcoincash/) subreddit is a forum dedicated to discussing the cryptocurrency Bitcoin Cash (BCH). The aim of this subreddit is to cultivate a space for constructive discussion about Bitcoin Cash. Intentionally disruptive behaviour and heavily off-topic discussion will be moderated accordingly. Please refer to the sidebar for the subreddit rules. **What is Bitcoin Cash?** Bitcoin Cash is a peer-to-peer electronic cash system. It's a permissionless, decentralised cryptocurrency that requires no trusted third parties and no central bank. With Bitcoin Cash you can safely and securely send money anywhere in the world, nearly for free. For more information about Bitcoin Cash, please visit [bitcoincash.org](https://www.bitcoincash.org/). **Is Bitcoin Cash different from “Bitcoin”?** Yes! In 2017, the Bitcoin project and its community split into two. Perhaps the least controversial way to refer to each side is simply by their respective ticker symbols, BTC and BCH. While exchanges commonly refer to BTC as simply “Bitcoin”, Bitcoin Cash, usually represented by the BCH ticker symbol, is considered by its supporters to be a legitimate continuation of the Bitcoin project, and the version with the best chance of creating a globally adopted peer-to-peer electronic cash system. **Why was it necessary to create Bitcoin Cash?** The legacy Bitcoin code had a maximum limit of 1MB of data per block, or about 4 transactions per second. There was also a common sentiment among Bitcoin Core developers that non-backwards compatible upgrades, commonly known as “hard forks”, should be avoided at all cost. This mindset severely limited the potential to introduce beneficial changes to Bitcoin, which were needed to prepare the protocol for mass adoption. Although technically simple, the Bitcoin community could not reach a consensus on raising the block size limit, even after years of debate. In 2017, capacity hit the 1MB-imposed wall, fees skyrocketed, and Bitcoin became unreliable, with some users unable to get their transactions confirmed even after days of waiting. Average transaction fees reached $50 in December 2017. As a result, Bitcoin stopped growing, and companies such as Steam and Microsoft began *dropping* Bitcoin, because it was no longer a cheap and reliable payment method. In August 2017, a subset of the Bitcoin community decided to move forward with a proposed protocol upgrade, forking Bitcoin, and creating Bitcoin Cash by lifting the block size limit as a step towards massive on-chain scaling. There is now ample capacity for everyone's transactions on the Bitcoin Cash blockchain; low fees and fast confirmations are standard, and the network has been allowed to grow again. **Isn’t** r/btc **“the Bitcoin Cash subreddit”?** It is worth noting that the r/btc subreddit came into use before Bitcoin Cash existed. It was originally created as a forum for open discussion about Bitcoin. After August 2015, r/btc gained a large user-base when the [r/bitcoin](https://www.reddit.com/r/bitcoin/) subreddit [began censoring](https://np.reddit.com/r/Bitcoin/comments/3h9cq4/its_time_for_a_break_about_the_recent_mess/) discussion about raising Bitcoin’s block size limit. 
After the Bitcoin community split over the Bitcoin Cash fork in August 2017, the [r/btc](https://www.reddit.com/r/btc/) Bitcoin community naturally became the Bitcoin Cash community, as that’s where its proponents already resided, having been ousted from r/bitcoin by censorship. To this day, [r/btc](https://www.reddit.com/r/btc/) continues to offer a place for open and censorship-free discussion about all Bitcoin forks, with minimal interference by moderators. **So how does** r/BitcoinCash **differ from** r/btc **?** In July 2019, the [r/BitcoinCash](https://www.reddit.com/r/bitcoincash/) subreddit [introduced](https://www.reddit.com/r/Bitcoincash/comments/cckje4/rbitcoincash_subreddit_change_of_moderation/?utm_source=share&utm_medium=web2x) a stricter moderation policy, following requests from the Bitcoin Cash community for an alternative and specific forum for discussing Bitcoin Cash. The intention is to offer a space that is more focused on specifically discussing Bitcoin Cash, as well as one that is free of the ongoing low-effort trolling that frequently takes advantage of [r/btc](https://www.reddit.com/r/btc/)’s principled commitment to free speech. This subreddit now offers all users a choice about the kind of forum that they wish to participate in. The hope is that, without the distractions that threaten to derail discussion on [r/btc](https://www.reddit.com/r/btc/), [r/BitcoinCash](https://www.reddit.com/r/bitcoincash/) may be able to foster a more focused, inclusive, and involved conversation. The moderation logs for r/BitcoinCash are public.
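The "about 4 transactions per second" figure quoted above is easy to reproduce with back-of-the-envelope arithmetic. Here is a quick sketch (the ~400-byte average transaction size is an assumption picked to match that figure, not a protocol constant; real averages vary):

```python
# Rough throughput implied by a block size cap, assuming ~10-minute blocks.
# The 400-byte average transaction size is an assumption, not a protocol constant.
def tx_per_second(block_size_mb, avg_tx_bytes=400, block_interval_s=600):
    return block_size_mb * 1_000_000 / avg_tx_bytes / block_interval_s

print(tx_per_second(1))   # ~4.2 tx/s for 1MB blocks, matching the figure above
print(tx_per_second(32))  # ~133 tx/s for 32MB blocks
```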
r/bitcoincash
post
r/Bitcoincash
2019-07-24
Z0FBQUFBQm9Delo1clZxVnZmYjNFSTU4SG1Zd0JUSkpIYXc5SkRubEFwXzJ4T0lzdnNqa29aekdlYUtvaHVkUnJoNHNBMHF2cXdrM3F4aW1UbXBBcFBId1NKNDdjZWV0VkFTcUVDV3RLN2dpTmVDdFhUMDBDcHM9
Z0FBQUFBQm9DemFzZ2E5S0RNR0xyU2I1MnVnVkxfZFR0SDRZSlVQTlRGSUU0cW9ZSTJrc0V5UWh6RHpZNU1jOFNXQ1JQSkd6akRNb0Y4ZEVVUFRMMVA4UHBYbEFwbXdCcFJrU0M1bFJlcjJPS3FESmtDbkZzS1MtNmRqeWt4WngxdER6S19UejBTZ21fSndxX1F1VXNFQldLM3N5cnNwU0NaVjB1UFBTYy1IeHIzX1dBZXQ5cDFqdkFkSEs3Y0VYaXhaeHFKMnMyNjJjYWpnM1ZLak1QS0dlR24wUUpzeEFZQT09
[https://greentfrapp.github.io/project-asimov/guide/](https://greentfrapp.github.io/project-asimov/guide/) Hi! I built this guide as part of a 3-month final project in my MSc, which involves communicating AI ethics concepts in a relatable manner. I'm still about 3 weeks away from submission and I'd love to hear any feedback!
r/aiethics
post
r/AIethics
2019-07-28
Z0FBQUFBQm9Delo1eU52ZGdxaXZtXzZRVllXdmFmT1RncXNuNWR1YjhESTVabGlZMmNuR1lqY2pOaHZRZ0UxRkFKc3o1UE1DMXhoQlJrTFB3U0lxSWdGNzVKN3c3WmRGbWc9PQ==
Z0FBQUFBQm9DemFzbE1HN3A4ckwwc1ZvTzZOR0RwQW1GbDBvX1BoNHRramlIMHA0cG1BOWxXb0Y1VUdmT1hGQkxsMUFsWGlmYXZ0Ml9PVVNLTHp1TElFeWlpb2w5THBXOW1vQ0J4dFdJZXFtblQ0QVRkTmxiQ250NXZrazM3bmdlQjJtVzdYdjduME1iM3FkcjNZcHlWZktBUnRmT3dUR3lqZl9oa3Zsd3U0Ymd4QW5rU2ppTHVCVTZjV0NGaVBNV3FFZ2J1bnVpZ2Zt
[https://textanalysisapis.home.blog/2019/08/13/understanding-sentiment-analysis-api/](https://textanalysisapis.home.blog/2019/08/13/understanding-sentiment-analysis-api/)
r/aiethics
post
r/AIethics
2019-08-20
Z0FBQUFBQm9Delo1MDdzU1I2TjJKWUFTelNfRGRrSllWVmh6ZWoxWGh2Z0tWN3RPbmQ1eFBXSTZYUHpIMUIyTTRaYUZ0N1BYVmw3NUxTRGJWNUlpeXhNTWM4SERBN2lqTmc9PQ==
Z0FBQUFBQm9DemFzeTFsYlB3OTB5aHRKU255RGFGSXpTOTVtVnc0SEc4N29SdGUyRnh6RFNBTDFKYVdPVFFfdVhIMk9nVWtabzU0OGlnYlRSWExtOTc3aEV0OGZ5YnhnbV9pMExJZFhxMkVuMGtwUVpvQVEzVmU4d1NCcDhjOVVYZzZYTEFMOW9JMGQySlhhZDlOWXR5RW5tS3V3cDJJZTJPcUFOMngwWXUtcWlqcmVoWVVTSExZcGZycWdfdTlxZlV3aFdlZE1zaGdW
An interesting and well-documented article about a pressing issue: how governments across the globe integrate ethical principles into AI applications, and why this is necessary to serve and protect societal values. [I recommend giving it a read here](https://www.botxo.ai/blog/ethical-ai-government/) Snippet: >AI is fertile ground when venturing beyond the frontiers of science and technology. However, like any discovery, it is vital that progress in this field does not come at the expense of humans. Scientific developments are to come hand in hand with relevant legislation and liability, to defend against malicious and harmful intent. It is then and only then that society can thrive from the creations that knowledge and research spawn.
r/aiethics
post
r/AIethics
2019-11-14
Z0FBQUFBQm9Delo1VUluZ05VRGtJZmVEYmhPQS05ZUV5ZVhjU3hva3R3eHJEdTVKQXNxVHBSWDZjX1lTNXAxYmNxby1rSms0ai05aDRmeHVJdUVidWZYUFVNSVlaZWZ2VGc9PQ==
Z0FBQUFBQm9DemFzY0FHbDE1blM3a0lueFZNUE5UaUN2TzVDU2t4NHBOQ09XeU1WaDVabzNoNXNMTXFQSVNjUW0zaDNES0lFUU1laUk1MkpSN1ZvOGxyejlnZkJzSi1yM0QwckdhWExQalBkLUFyWnBPX0N6MTJqS2pPMy15Szl1MXpXcGlGNFM2R3l4Tl9NOTRwanRkRG1XZmVkZ1pXOVZPRkh5bWQ4LW5kNW1udUs0TVFSX0QwPQ==
Is there any source of news on how the dark side is doing with AI?
r/aiethics
post
r/AIethics
2019-11-26
Z0FBQUFBQm9Delo1Rm1fS1NOY2ZQXzZyTEdDa0RfQjR3c3JRY1JrMGQwQktIMWNucWlQTGVRTXZoMHJOdE9qYThFOW1OUmdmNHFSOVdIWmprbm1WSklhdDQ0bFdVeWxwM3c9PQ==
Z0FBQUFBQm9DemFzLS1KWENYVHE4TEpSdGlZRTFtTnczNmFGOVlfc2RMQ2pxS05OWHZTYWthUXZiaFhtdjFFQTFlSm10RzlfSDF6MXZxdmc5aDlyVkhrd21GdFlyZURyVDgyN19SWnlnYUl4cjNVNS1SeVNSUDdJLVJoUXJTbWs0YWFqUmVDMU1WVkZZMEU1YjhreDhOcEJ3cmRZQ3Zxbm5tQ3NIc1BzdVZHUEVRRmdnMmJjTGF5azNUYnU0eFhnT0syMmhCNDhub2xw
The European Commission put out this questionnaire [https://www.starts.eu/article/detail/starts-consultation-on-ai/](https://www.starts.eu/article/detail/starts-consultation-on-ai/) where they would like to find out how the arts and artists can positively impact AI and AI research. Maybe someone here would be interested in taking part as well, as the study's outcomes might be relevant to future regulations and funding in the areas of creating responsible and explainable AIs.
r/aiethics
post
r/AIethics
2019-11-29
Z0FBQUFBQm9Delo1ekFHNV9wa0tVVlFJZjZlSFZjd3ltY0ZhTUE2LW1aVTBYb28tQkJmUUlFRmZfbWQ2SW84NjJvbVlWeFhERmtoUXR4MWZyWWN5UUJmUlA4ZGRXbnNEaUE9PQ==
Z0FBQUFBQm9DemFzNnFCa1NCWVBoUXFQU2M1bE1sbXRoZ05ycTRoOHRPQ2ZoNlNoYmZtcTdpak1QT0hvUjhIb0hjTVdiZ2dRbXVzM3hNMGx2YUtzZ2FYTDdFWDZNeERNdGpYQm5ieGd3SmdqTlRlZ05CcXBjZEZSR3o3R1BTMjBBSlpkRlRjR19ncThrWVRHZzRiZUtmT3NjUXF0dWttMXdqemdjZzVXbUpNcU1Mb2t1RXcwQWtUMFZZSFVyal9XMkhPSkt4azNQY3Ut
So the chief advisor to the UK prime minister put out a rather interesting/disturbing job advert looking for specialists in AI/ML and data scientists, amongst others. He lists a bunch of papers focusing on prediction, noted below, that potential candidates should be able to discuss. I am not an AI expert/data scientist. I am wondering what kind of shenanigans the advisor is planning with such a reading list, considering the types of people he is trying to attract. There are also the ethical implications of said interests. If you are British, you may be aware that the chief advisor to the UK PM is not an ethical person. And when we are talking about using prediction, there is concern about what kinds of abuse this individual will commit with such research. **So what are your expert predictions about the type of stuff the UK prime minister will be wanting to predict based on the reading list below? I'm looking for the benevolent, but especially the malevolent possibilities.** The papers: * This Nature paper, [*Early warning signals for critical transitions in a thermoacoustic system*](https://www.nature.com/articles/srep35310), looking at early warning systems in physics that could be applied to other areas from finance to epidemics. * [*Statistical & ML forecasting methods: Concerns and ways forward, Spyros Makridakis, 2018*](http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0194889&type=printable). This compares statistical and ML methods in a forecasting tournament (won by a hybrid stats/ML approach). * [*Complex Contagions: A Decade in Review, 2017*](https://arxiv.org/pdf/1710.07606.pdf). This looks at a large number of studies on ‘what goes viral and why?’. A lot of studies in this field are dodgy (bad maths, don’t replicate etc); an important question is which ones are worth examining. * [*Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach, 2018*](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.024102). This applies ML to predict chaotic systems. * [*Scale-free networks are rare, Nature 2019*](https://www.nature.com/articles/s41467-019-08746-5). This looks at the question of how widespread scale-free networks really are and how useful this approach is for making predictions in diverse fields. * [*On the frequency and severity of interstate wars, 2019*](https://dominiccummings.com/2019/03/06/complexity-and-prediction-vi-a-model-predicts-the-frequency-and-severity-of-interstate-wars-a-profound-mystery-for-which-we-have-no-explanation/)*.* ‘How can it be possible that the frequency and severity of interstate wars are so consistent with a stationary model, despite the enormous changes and obviously non-stationary dynamics in human population, in the number of recognized states, in commerce, communication, public health, and technology, and even in the modes of war itself? The fact that the absolute number and sizes of wars are plausibly stable in the face of these changes is a profound mystery for which we have no explanation.’ Does this claim stack up? * The papers on computational rationality below. * [The work of *Judea Pearl*](https://dominiccummings.com/2018/05/21/technology-the-state-of-the-art-in-ai-causality-and-hypotheticals/), the leading scholar of causation who has transformed the field. 
The "job advert": [https://dominiccummings.com/2020/01/02/two-hands-are-a-lot-were-hiring-data-scientists-project-managers-policy-experts-assorted-weirdos/](https://dominiccummings.com/2020/01/02/two-hands-are-a-lot-were-hiring-data-scientists-project-managers-policy-experts-assorted-weirdos/)
r/aiethics
post
r/AIethics
2020-01-03
Z0FBQUFBQm9Delo1Y0RKSmRxRk85VEYxV0lRNWhPanBoT29KNHBJOUlmTnZwMzFpaVlFVm45b0JLVWxaUUhxeFNucWJHbWdGZXdYTldRakZUbXRuc19haU1ER0NQdEN6Umc9PQ==
Z0FBQUFBQm9DemFzUGxRdFY0ZWJ4a2lNM1J6VXpNX1plck1rSTRKck9qUzR6eW5vRVViaGxXWTNKcVJIV3Bsb0RCOUJvMm9tOFNGYzA5OHFpTk1TRUVTRFgzbWFzSlpVOHprOVBXWGE4WFdfUkJhLTlEWERUVnNoZ1M3MFJsbG5fZXJrNFNfTllIcm02OW9WVXdkVThub0pCOVkzeFhINXRCUUNGTUZFTURsWXNvSU12UV8xamR5VlJOZEFpTGZ4TUZ1V1k1RXRlcnlEV29VTUpObU9RZGNpMWI0RjZuYW9fQT09
There has been a slow and steady influx of unwanted and misguided conversation plaguing our boards over the last year or so. I don't think this is a surprise to any of you. While we ultimately encourage healthy discussion around both the positives and negatives of dating, the overall spirit of this sub has been lost. Many of our readers have expressed their concern to our moderation team and we honestly feel the same way. Our "No Soap-boxing or Promoting an Agenda" rule has always been on the sidebar for our users to see, but I want to stress our current stance on the topic. **Soap-boxing will and has always included red/black-pill ideology, "alpha-male" talk, and the subset of vocabulary that comes with it.** This means that using our board to preach about how there is no hope for men (or women) who are conventionally unattractive is unwanted and will be removed. Using our board to discuss how you think women are shallow and will only choose the top percentage of men is unwanted and will be removed. Even just a mention of the term "Chad" is unwanted and will be removed. We can sympathize that dating is difficult and is even more difficult for people who might not be the prettiest. It's no secret to anyone. What we value, though, is genuine discussion and helping those who actually want and need it. The countless misogynistic threads about how women and society aren't fair to men are toxic and don't do anyone any favors. There are better subreddits that would love to discuss these types of concerns with you in a more healthy way. Misandry is equally intolerable. At the end of the day let's lift each other up. Let's share our experiences and learn and/or laugh from them! Ask the questions that need to be asked. But let's not lose sight of what dating is really about. **EDIT: If you do see any rule-breaking behavior please report it so that we can take action. It's hard to see every comment. Thanks!**
r/dating
post
r/dating
2020-01-21
Z0FBQUFBQm9Delo1UmlDUndNdFJRQXF2UkgxVDdaODlVRG1sSnhMS1VweEhkTmtvd0JvdVlQck1qeGJXNGxfWkdGMnhfNnZkdEQ5ZXhjV3VQaFRFRjVVNElaQ1E3bVNQcFE9PQ==
Z0FBQUFBQm9DemFzQk8tY3lHRjJSQkw5Zk5XT0VLVkRrb1Jzb1Q0TDZWYWYybEhDS3JwR1ZiN1RuWTBRZWpOUEs4aVV6MXZTQWQ4SkFSckZOcFlvbW5hZnlpek5FbmtyTEtYR3FGdmdyNWpzNHlfWURjQTB3dWNtMGljVDJRMTBDSFRIYWxpUjI1OGQ0MkhhcXlWT1h1a0dYNUJlREJVWkU2YUpNdWtvTlZNd1kxWmk0LUxRZl9RTUZjNkxMVGZGcFBIOV9XWTlCNzFE
And is it a stretch to predict that ML could be used to refine and evolve QC? So QML speeds up ML, and ML refines QC. Is this one way SAI could evolve? Obviously this is mostly conjecture at this time, but fascinating! https://www.quantaneo.com/How-may-quantum-computing-affect-Artificial-Intelligence_a391.html Also, apparently it takes (at this time) 53 qubits to beat the world's fastest supercomputer: https://bigthink.com/technology-innovation/google-quantum-computer Just how relevant is the Ethics Question? While we sit and gaze at our navels, the bubble we find ourselves in could be rapidly shrinking! Seriously, all I would wish for is to be a fly on the (cloud) wall for the next few centuries . . . Iacocca used to say Lead, Follow or Get out of the Way. My sense is Merge/Uplink, or become Extinct.
r/aiethics
post
r/AIethics
2020-01-23
Z0FBQUFBQm9Delo1el81Zm1zelJsckhWaWhkRk9vY0lndGxTUFZiUDNCYncxd2Zsd2dwWDVRei1Lc0JJaVB0Vl9LSUZYZE5xNzVXM1ZGYXIyVTZBbXVIdmp0U3pwNnpVX3c9PQ==
Z0FBQUFBQm9DemFzR3BsZjBIR1ZzTTFUc2V2SzFDUlBldE1QWTFnOGthOXBXcDJ4WE9xZy1uT3VFaXJTUm95WUIzMFdWakNKQk5vbDAxczVwMmFMSEpHZXVnSmRlZng4ajg3QlFpaUlveENFWUpMNE5SaUhKVmNsMURDaGRKOXptZGtjWW9IaW5KMEx6ZlRLZDZPTE0ycFQ2UWlFbkhCN1h6cldidUZ1ZXRZd3NLVk9RRG45Mi1uMXp0M0U3WXNrUGRlQXdQQ3RKMk4wQm5aY0JMMEkweC1La0VfcFVFUWtvdz09
I have been thinking about AI and ethics lately. Some countries show commitment to the responsible development of AI. For example, Denmark does its best to make[ AI projects human-centric](https://www.botxo.ai/blog/danish-ai-denmark-digital-dream/). The implementation of AI is based on equality, security and freedom. Do you think that other countries can follow the Danish model?
r/aiethics
post
r/AIethics
2020-02-07
Z0FBQUFBQm9Delo1SVFULW81cWd2RFFPZndZNmp1RkxwN3VUUUM3c3JObkpBSkZvZ0tLS3gyNHZZRWgyTXlxbHo5VlVpaDNRa3cyVzBkb0hHMHZXUHNMQmpjNUxRTXVxZEE9PQ==
Z0FBQUFBQm9DemFzamtqeDh6TktkeDM4X3hlaFpiOUJycGdQQlMwdkJ5NVJOMGJJUGVrOXI5Y1NvQnhwQ1ZHNmxDRnFyMFVnZ3dpLVpuamtnc1p4b0dfYURYNWxZLTZpWTBvVUlRWTdsTHFTd1YzdEY3OTdFc2kwcG85MXl3ajBxaXZGN3EwRVpucVlvUjFNd0ZfSk04eXNLaHBEdEZ4VHBfU2hyREhLSWFoRDgtUlRta0kwRFp0RUpRVGtYQjBncy1EajVGZHY1UjM1VThtVHpuZGRDMDBYcV9vVjdRM2kwQT09
Greetings! I am in need of your help with a class project. If you have 3 minutes, please [complete this survey](https://docs.google.com/forms/d/e/1FAIpQLSfGviF2yZBhJW3DyVxUpwKjSYcNtNhC3s3TsY95SUrWgjOMRQ/viewform?usp=sf_link). I am exploring the topic of human-like agents (i.e., Siri, Google Assistant). I am only using this data for a class project; it will not be published. I am willing to answer any questions. Your support is greatly appreciated!
r/aiethics
post
r/AIethics
2020-02-22
Z0FBQUFBQm9Delo1WmNCazIwSmpKcDZSUGJHazJRR0o3T3AtOTl3ZkRHNEJ0SjhnWDFyb21uVFFGeUJrWHphUVh2Y3dzZl9KOWFFNjB2YjVIY3FjdjFMZTdSekpyS3gxUEE9PQ==
Z0FBQUFBQm9DemFzeUgwWWRyZUdjRUlzWHdmRWppMzJBQ2Y1YThHN2JZV3o5dzUxdk5KV3VfT1lDbjNWZHhyWHYtRENTNzVFanhfbmpWZUlpdExLNlV4UkV0NXFyeWhpM1hQQ1licmxjS1RvaTNCdVJqMmRhclVaNlM2TURlQi1mdnNlY0RUdHVuUFhYTGY2TFdCZnhyWm9ZT3BQalhfamc0aVJYM09jTmRXc0w3WncxQmFIaXBpNEUzOVhuVnB5RWRRQ0QzeWI5blN3UUxOaF9tNUFyTHJQTk84dGVaWFBMZz09
Final damage screenshot seconds before account was liquidated: https://i.imgur.com/e0sEWEm.jpg Thanks to me, UPRO and TMF are now 90% stress tests on TOS with no margin reduction credit, up from 36% and 24% stress tests respectively. Or maybe I was on Reg-T when I took the screenshot, IDK and IDC. Talking with risk management, apparently I flew under the radar: they didn't see a margin balance due to the box spread until other account alerts went off, as customer service will take a look whenever anyone is down $1 million or more PnL, as a courtesy to chat with their clients. Needless to say customer service was horrified and I got another margin phone call to wire in $1,250,000 in the next five minutes or they'd liquidate. I guess they give Portfolio Margin customers a little bit more leeway... I took the five minutes to grab this one final screenshot. I'm hoping for some bailout money from coronavirus too. I talked with the bankruptcy lawyer that set me up with the asset protection plan and he already dropped me as a client. I never imagined beer-virus would do this to me. I'm gonna take some time to just not think about the virus or anything else. # TL;DR what strike/put/call/etc I discovered a bug in my broker's risk management software. I guess buy RCL calls per my previous DD. Edit: Previous post entering the trade and proof of portfolio margin/etc: https://www.reddit.com/r/wallstreetbets/comments/fepd4q/portfolio_margin_is_10x_worse_than_u1r0nymans_box/
r/wallstreetbets
post
r/wallstreetbets
2020-03-19
Z0FBQUFBQm9Delo1ZVg4NXRGcFdtXzltNlFQaXhVNUdZc0N2YmpVSTU4Sk1MX2N1dEdtOEY3aEF2OUZDZG5DeWg4ODJucUtseUFOS0JySXFYWlI2NVZwQUQzMjVlaHJ4RXc9PQ==
Z0FBQUFBQm9DemFzMmZIMVdwWGZhYXd0eUpaYWpnZUNUajlQaE14U1ZyekpSR2dSUXV1Qm1IZ0hrSVdTc0ZtUmdmVG5kVjNCZllNbThLenBfM0N4TEZZNFpFV05KeEc3M0taNWRDQU9DT0tyTkJONkZKa2g3TTJZUXZSUWx2MGp5M0FRZzhXLUdZVlZMeXJCMDRhRWg5WVJSQXAyM21uRGxqbF94ZHdzZExwclFBWnNoMkZaNlFxYTlrUUJmS0lZZWN1Y3FyZEpVSzc3OTAxbFdjTmc3S0xTay1LNUlvYW9Bdz09
whatever you are thinking right now, do the opposite. when there is good news, buy puts. when there is bad news, buy calls. the market has figured out the ultimate wsb strat, that is, to inverse wsb. thus we must go 1 step further and inverse the inverse
r/wallstreetbets
post
r/wallstreetbets
2020-03-26
Z0FBQUFBQm9Delo1emZOWFl5ME5YeW1fZXQ3b1ZHX3AwUm9LWFBHLWNxeFh5RWQwazByTDVab05hS3Z5YWhyY1lfQXpCemNIYVhyTFhTVklia3NMRmVjcTBFVHpOc1IzVWc9PQ==
Z0FBQUFBQm9DemFzbFUxLW5vcTVxVmJVaVVDbWFvdGxYUjVzNDF3VkdKZzJLWEtacms0SEw3RTVYakdyYXpUQWxOb193NWVPWjhFd0YyTk9xd1ZTQ2R2RGlFX2ladTZYa2JqTzZ6eFE4Qlhhb1h6eFV1N3B0YUhaMmM1OTZYdkRFT3JtejdwUFpqM0lxMnZpVFkwYVR0RVNKTXJucGIzdEZ6cFZXRkZDYjVSMXJuaGRKazdOQTdickFNMUtvd3NCTWNma0UzV0NtOFV0U3huMUY4b09sd0JMeEM1OUplZTU4QT09
Currently reading Fahrenheit 451 and I'm at the section where Beatty stops by Montag's home the day after burning the old lady in her home. He begins to tell Montag how society slowly changed from the old world to the world the story exists in. This snippet in particular floored me: >You can’t build a house without nails and wood. If you don’t want a house built, hide the nails and wood. If you don’t want a man unhappy politically, don’t give him two sides to a question to worry him; give him one. Better yet, give him none. Let him forget there is such a thing as war. If the government is inefficient, top-heavy, and tax-mad, better it be all those than that people worry over it. Peace, Montag. Give the people contests they win by remembering the words to more popular songs or the names of state capitals or how much corn Iowa grew last year. Cram them full of noncombustible data, chock them so damned full of ‘facts’ they feel stuffed, but absolutely ‘brilliant’ with information. Then they’ll feel they’re thinking, they’ll get a sense of motion without moving. And they’ll be happy, because facts of that sort don’t change. Don’t give them any slippery stuff like philosophy or sociology to tie things up with. That way lies melancholy. Any man who can take a TV wall apart and put it back together again, and most men can, nowadays, is happier than any man who tries to slide rule, measure, and equate the universe, which just won’t be measured or equated without making man feel bestial and lonely. I know, I’ve tried it; to hell with it. So bring on your clubs and parties, your acrobats and magicians, your daredevils, jet cars, motorcycle helicopters, your sex and heroin, more of everything to do with automatic reflex. If the drama is bad, if the film says nothing, if the play is hollow, sting me with the Theremin, loudly. I’ll think I’m responding to the play, when it’s only a tactile reaction to vibration. But I don’t care. I just like solid entertainment. It's honestly frightening how glimpses of this paragraph resemble the modern world. At the expense of landing on r/imverysmart I think today's society exudes a lot of what Beatty is saying here. The part about covering up bland stories with dramatic music to mask its superficial nature made me laugh because it reminds me so much of how modern entertainment is consumed for emotions instead of ideas (I'm very guilty of this haha). Sorry if this isn't what this sub is for, I just wanted to share a great passage.
r/books
post
r/books
2020-07-25
Z0FBQUFBQm9Delo1b0MzV0ptWnRINTZ3Q3dYWmFPa3pXY3Z3U1prUVR2NVNkNXE2MkxtZnpxR0FhV01MMXB6MVZoc0hURnJYaF9UU2VreHQ4Z3lPd0xJVDZ4dlJtTFdaUVE9PQ==
Z0FBQUFBQm9DemFzZ0l5ajZHZlNOU1ZESVJGSWg0RGUxQWxScldmYkFMS3VVT05pT091M0VKeTZySXU3TU8wN1hGUGxPcmM1X2xIc25CaXc4aHBEZzJzLUtDNlFFYW1rOVhrS1AyckdsNXkzTDZGVXhUUGo0TDFyZENNaHFMR0ptOFlSVEpJY08zbmpibElDelRZUVNHYl9GUXVSc29KTEVMUTZfckNHLXNoWHJIaWVSZmhXNDJpS0RvM29fY0N0alNyWVYtOXIxR1RMakZKc0s2STgtclNHLTQ3ZGIwQ1FSUT09
Hello, I am a CS student who knows advanced math (such as functional analysis) and who wants to do research in reinforcement learning, but unfortunately I am a beginner and need help learning things in this field and in mathematics. Unfortunately, no one at my university is interested in reinforcement learning, so they can't help me. My question is: do you know any school (especially a free one) or university or some other educational center that teaches you how to do research in this field? (If not RL, then CV or NLP is okay too.) Or are there any useful links or resources that can help me study by myself?
r/airesearch
post
r/airesearch
2020-07-28
Z0FBQUFBQm9Delo1dkVjVFNqWXhQZU03RTc0aTBOc2hzVjNiWXJWM1ViWVlyWEJtRUtCZ1FMbHFidF9BMXg1QjA3SWF4M3BHazZBODB0eWotMGhVZE5Rb0luOTFUSkR6cWc9PQ==
Z0FBQUFBQm9DemFzaGJSY3pXdm1xVmlQVzdDT1JtZXY5M0JrLVFZNllOcE1TTWMwbXU0TFBaSE1veW5wV2ZFZ25hS0RGM0xwSHpjc1lDSWFJRlhQU21ndktKZnFjZWxzOURIUUxfbUk4X2dpRklabmpkb2JMblhYcmp4MFFqSXdoMDdOSkQ2Zi1BcjYxMFBlbnYxU3FMMUNYRGlOZmxfdFA1WUZkNWlpT09YdjEtU3Y4QVo5SGF3X2dBN2FNVUljbXZDRHlia1ZyUklJ
This FAQ and information thread serves to inform both new and existing users about common Bitcoin topics that readers coming to this Bitcoin subreddit may have. This is a living and breathing document, which will change over time. If you have suggestions on how to change it, please comment below or message the mods. ----- **What is \/r/btc?** The \/r/btc reddit community was originally created as a community to discuss bitcoin. It quickly gained momentum in August 2015 when the bitcoin block size debate heightened. On the legacy \/r/bitcoin subreddit it was discovered that moderators were heavily censoring discussions that were not in line with their own opinions. Once realized, the subreddit subscribers began to openly question the censorship [which led to](http://archive.is/0G8az) thousands of redditors being banned from the \/r/bitcoin subreddit. A large number of redditors switched to other subreddits such as /r/bitcoin_uncensored and /r/btc. For a run-down on the history of censorship, please read [A (brief and incomplete) history of censorship in /r/bitcoin by John Blocke](https://medium.com/@johnblocke/a-brief-and-incomplete-history-of-censorship-in-r-bitcoin-c85a290fe43) and [/r/Bitcoin Censorship, Revisited by John Blocke](https://medium.com/@johnblocke/r-bitcoin-censorship-revisited-58d5b1bdcd64). As yet another example, \/r/bitcoin [censored 5,683 posts and comments](https://www.reddit.com/r/noncensored_bitcoin/comments/7414nf/september_2017_stats_post/) just in the month of September 2017 alone. This shows the sheer magnitude of censorship that is happening, which continues to this day. [Read a synopsis of /r/bitcoin](https://www.reddit.com/r/BitcoinMarkets/comments/6rxw7k/informative_btc_vs_bch_articles/dl8v4lp/) to get the full story and a complete understanding of [why people are so upset](https://www.reddit.com/r/KarmaCourt/comments/5gvqf6/and_now_for_something_completely_different_the/) with \/r/bitcoin's censorship. Further reading can be found [here](https://www.reddit.com/r/btc/comments/83vgdm/a_collection_of_evidence_regarding_bitcoins/) and [here](https://www.reddit.com/r/btc/comments/cpftea/its_important_to_remember_the_past_and_show/ewp7abj/) with a giant collection of information regarding these topics. ----- **Why is censorship bad for Bitcoin?** As demonstrated above, censorship has become prevalent in almost all of the major Bitcoin [communication channels](https://www.reddit.com/r/btc/comments/5mxov4/wtf_north_koreas_cgc_tries_to_control_bitcoin/dc78hwl/). The [impacts of censorship in Bitcoin](https://www.reddit.com/r/btc/comments/5cxx8t/the_impacts_of_censorship/) are very real. "Censorship can really hinder a society if it is bad enough. Because media is such a large part of people’s lives today and it is the source of basically all information, if the information is not being given in full or truthfully then the society is left uneducated [...] Censorship is probably the number one way to lower people’s right to freedom of speech." By censoring certain topics and [specific words](https://www.reddit.com/r/btc/comments/5vr7ij/partial_list_of_words_that_automod_removes_from/), people in these Bitcoin communication channels are literally being brainwashed into thinking a certain way, molding the reader in a way that they desire; this has a lasting impact especially on users who are new to Bitcoin. Censoring in Bitcoin is the direct opposite of what the spirit of Bitcoin is, and should be condemned anytime it occurs. 
Also, it's important to [think critically](https://www.reddit.com/r/btc/comments/5txje0/edward_snowden_the_answer_to_fake_news_is_not/) and [independently](https://www.reddit.com/r/btc/comments/9cm3nj/psa_the_sub_has_been_under_attack_by_various_bad/), and have an open mind. ----- **Why do some groups attempt to discredit \/r/btc?** This subreddit has become a place to discuss everything Bitcoin-related and even other cryptocurrencies at times when the topics are relevant to the overall ecosystem. Since this subreddit is one of the few places on Reddit where users will not be censored for their opinions and people are allowed to speak freely, truth is often said here without the fear of reprisal from moderators in the form of bans and censorship. Because of this freedom, people and groups who don't want you to hear the truth will do almost anything they can to try to stop you from speaking the truth and try to manipulate readers here. You can see many cited examples of cases where special interest groups have gone out of their way to attack this subreddit and attempt to disrupt and discredit it. [See the examples here.](https://old.reddit.com/r/btc/comments/9cm3nj/psa_the_sub_has_been_under_attack_by_various_bad/) ----- **What is the goal of \/r/btc?** This subreddit is a diverse community dedicated to the success of bitcoin. \/r/btc honors the spirit and nature of Bitcoin being a place for open and free discussion about Bitcoin without the interference of moderators. Subscribers at any time can look at and review the [public moderator logs](https://modlogs.fyi/r/btc). This subreddit does have [rules](https://www.reddit.com/r/btc/wiki/index#wiki_rules_for_this_subreddit_.28also_in_the_sidebar.29) as mandated by reddit that we must follow plus a couple of rules of our own. Make sure to **[read the /r/btc wiki](https://www.reddit.com/r/btc/wiki/index)** for more information and resources about this subreddit which includes information such as the benefits of Bitcoin, how to get started with Bitcoin, and more. ----- **What is Bitcoin?** Bitcoin is a digital currency, also called a virtual currency, which can be transacted at low cost nearly instantly from anywhere in the world. Bitcoin also powers the blockchain, which is a public immutable and decentralized global ledger. Unlike traditional currencies such as dollars, bitcoins are issued and managed without the need for any central authority whatsoever. There is no government, company, or bank in charge of Bitcoin. As such, it is more resistant to wild inflation and corrupt banks. With Bitcoin, you can be your own bank. Read the Bitcoin whitepaper to further understand the schematics of how Bitcoin works. ----- **What is Bitcoin Cash?** Bitcoin Cash (ticker symbol: BCH) is an updated version of Bitcoin which solves the scaling problems that have been plaguing Bitcoin Core (ticker symbol: BTC) for years. Bitcoin (BCH) is just a continuation of the Bitcoin project that allows for bigger blocks which will give way to more growth and adoption. You can read more about Bitcoin on [BitcoinCash.org](https://bitcoincash.org/) or read [What is Bitcoin Cash](https://www.bitcoin.com/info/what-is-bitcoin-cash) for additional details. ----- **How do I buy Bitcoin?** You can buy Bitcoin on an exchange or with a brokerage. If you're looking to buy, you can [buy Bitcoin with your credit card](https://buy.bitcoin.com/) to get started quickly and safely. 
There are several other places to buy Bitcoin too; please check the sidebar under brokers, exchanges, and trading for other go-to service providers to begin buying and trading Bitcoin. Make sure to [do your homework first](https://www.bitcoin.com/get-started/how-to-choose-the-right-bitcoin-exchange/) before choosing an exchange to ensure you are choosing the right one for you. ----- **How do I store my Bitcoin securely?** After the initial step of buying your first Bitcoin, you will need a Bitcoin wallet to secure your Bitcoin. Knowing which Bitcoin wallet to choose is the second most important step in becoming a Bitcoin user. Since you are investing funds into Bitcoin, choosing the right Bitcoin wallet for you is a critical step that shouldn’t be taken lightly. [Use this guide to help you choose the right wallet for you](https://www.bitcoin.com/get-started/how-to-choose-the-right-bitcoin-wallet/). Check the sidebar under Bitcoin wallets to get started and find a wallet that you can store your Bitcoin in. ----- **Why is my transaction taking so long to process?** Bitcoin transactions typically confirm in ~10 minutes. A confirmation means that the Bitcoin transaction has been verified by the network through the process known as mining. Once a transaction is confirmed, it cannot be reversed or double spent. Transactions are included in blocks. If you have sent out a Bitcoin transaction and it’s delayed, chances are the [transaction fee](https://bitcoinfees.cash/) you used wasn’t enough to out-compete others, causing it to be [backlogged](https://jochen-hoenicke.de/queue/#0,24h). The transaction won’t confirm until it clears the backlog. This typically occurs when using the Bitcoin Core (BTC) blockchain due to poor central planning. If you are using Bitcoin (BCH), you shouldn't encounter these problems, as the block limits have been raised to accommodate a massive amount of volume, freeing up space and lowering transaction costs. ----- **Why does my transaction cost so much? I thought Bitcoin was supposed to be cheap.** As described above, transaction fees have spiked on the Bitcoin Core (BTC) blockchain mainly due to a limit on transaction space. This has created what is called a fee market, which has primarily been a premature, artificially induced increase in transaction fees due to the limited amount of block space available (supply vs. demand). The original plan was for fees to help secure the network when the block reward decreased and eventually stopped, but the plan was not to reach that point until some time in the future, [around the year 2140](https://wiki.bitcoin.com/w/Controlled_supply#Projected_Bitcoins_Long_Term). This original plan was restored with Bitcoin (BCH), where fees are typically less than a single penny per transaction. ----- **What is the block size limit?** The original Bitcoin client didn’t have a block size cap; however, it was limited to 32MB by the Bitcoin protocol's message size constraint. Then, in July 2010, Bitcoin’s creator Satoshi Nakamoto introduced a [temporary 1MB limit](https://sourceforge.net/p/bitcoin/code/103/tree//trunk/main.h?diff=515630145fcbc978e39dbaa5:102) as an anti-DDoS measure. The temporary nature of the measure was made clear three months later, when Satoshi said the block size limit could be [increased again by phasing it in](https://archive.is/L5yvP#selection-3315.0-3315.7) when needed (when demand arises).
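As a quick sanity check on the year-2140 figure cited above, here is a minimal sketch of the block subsidy schedule, assuming only the well-known consensus constants (50 BTC initial subsidy, a halving every 210,000 blocks, and a ~10-minute block target):

```python
# Block subsidy schedule sketch: the subsidy starts at 50 BTC and is
# halved (integer division, as in the actual right-shift) every
# 210,000 blocks until it reaches zero.
GENESIS_YEAR = 2009
BLOCKS_PER_HALVING = 210_000
MINUTES_PER_BLOCK = 10

subsidy_sats = 50 * 100_000_000  # initial subsidy, in satoshis
total_sats = 0
eras = 0
while subsidy_sats > 0:
    total_sats += subsidy_sats * BLOCKS_PER_HALVING
    subsidy_sats //= 2
    eras += 1

years_per_era = BLOCKS_PER_HALVING * MINUTES_PER_BLOCK / (60 * 24 * 365.25)
print(f"Total supply: ~{total_sats / 1e8:,.2f} BTC")  # ~20,999,999.98 BTC
print(f"Subsidy ends around {GENESIS_YEAR + eras * years_per_era:.0f}")  # ~2141, matching the ~2140 projection
```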
When introducing Bitcoin on the cryptography mailing list in 2008, Satoshi said that [scaling to Visa levels](https://www.mail-archive.com/cryptography%40metzdowd.com/msg09964.html) “would probably not seem like a big deal.” ----- **What is the block size debate all about anyways?** The block size debate boils down to different sets of users who are trying to come to consensus on the best way to scale Bitcoin for growth and success. Scaling Bitcoin has actually been a topic of discussion since Bitcoin was first released in 2008; for example, you can read how Satoshi Nakamoto was [asked about scaling here](https://www.mail-archive.com/cryptography%40metzdowd.com/msg09964.html) and how he thought at the time it would be addressed. Fortunately, Bitcoin has seen tremendous growth, and by the year 2013 scaling Bitcoin had become a hot topic. For a run-down on the history of scaling and how we got to where we are today, see the [Block size limit debate history lesson post](https://www.reddit.com/r/btc/comments/61mxuj/block_size_limit_debate_history_lesson/). ----- **What is a hard fork?** A hard fork is when a block is broadcast under a new and different set of protocol rules, which is accepted by nodes that have upgraded to support the new protocol. In this case, Bitcoin diverges from a single blockchain into two separate blockchains (a majority chain and a minority chain). ----- **What is a soft fork?** A soft fork is when a block is broadcast under a new and different set of protocol rules, but the difference is that old nodes don’t realize the rules have changed and continue to accept blocks created by the newer nodes. Some argue that [soft forks are bad](https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7) because they trick old, non-upgraded nodes into believing transactions are valid when they may not actually be valid. This can also be described as coercion, as [explained by Vitalik Buterin](https://vitalik.ca/general/2017/03/14/forks_and_markets.html). ----- **Doesn't it hurt decentralization if we increase the block size?** Some argue that by lifting the limit on transaction space, the cost of validating transactions on individual nodes will increase to the point where people will not be able to run nodes individually, giving way to centralization. This is a false dilemma, because at this time there is no proven metric to quantify decentralization; it has also been shown that the [current level of decentralization will remain](https://www.reddit.com/r/btc/comments/5fyve1/fallacy_the_key_to_bitcoins_decentralization_is_a/) with or without a block size increase. It's a logical fallacy to believe that decentralization only exists when you have people all over the world running full nodes. The reality is that only people with the income to sustain running a full node (even at 1MB) will be doing it. So whether it's 1MB, 2MB, or 32MB, the costs of doing business are negligible for the people who can already do it. If the block size limit is removed, this will also allow more users worldwide to use and transact, increasing the likelihood of having more individual node operators. [Decentralization is not a metric, it's a tool or direction](https://twitter.com/lopp/status/693537120351293440). This is a good video describing the direction of [how decentralization should look](https://www.youtube.com/watch?v=7S1IqaSLrq8).
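To make the hard/soft fork asymmetry described above concrete, here is a toy sketch (illustrative only, not real consensus code), using a single made-up validity rule, the maximum block size:

```python
# Old rule: blocks up to 1MB are valid.
def old_node_accepts(block_size: int) -> bool:
    return block_size <= 1_000_000

# Hard fork: the rule is LOOSENED, so old nodes reject some new blocks.
def hard_fork_accepts(block_size: int) -> bool:
    return block_size <= 8_000_000

# Soft fork: the rule is TIGHTENED, so every new block still looks valid
# to old nodes, and they follow along without realizing anything changed.
def soft_fork_accepts(block_size: int) -> bool:
    return block_size <= 500_000

big_block, small_block = 4_000_000, 400_000
print(old_node_accepts(big_block), hard_fork_accepts(big_block))      # False True -> chain split
print(old_node_accepts(small_block), soft_fork_accepts(small_block))  # True True  -> no split
```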
Additionally, the effects of increasing the block capacity beyond 1MB [have been studied](https://www.cryptocoinsnews.com/cornell-study-recommends-4mb-blocksize-bitcoin/), with results showing that up to 4MB is safe and will not hurt decentralization ([Cornell paper, PDF](http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf)). Other papers argue that even having no block size limit at all is safe ([Peter Rizun, PDF](https://www.bitcoinunlimited.info/resources/feemarket.pdf)). Lastly, in an informal survey among top Bitcoin miners, many agreed that a block size increase [between 2-4MB is acceptable](http://archive.is/U3dqr). ----- **What now?** Bitcoin is a fluid, ever-changing system. If you want to keep up with Bitcoin, we suggest that you subscribe to /r/btc and stay in the loop here, as well as in other places, to get a healthy dose of perspective from different sources. Also, check the sidebar for additional resources. Have more questions? [Submit a post](https://www.reddit.com/r/btc/submit?selftext=true) and ask your peers for help! ----- Note: This FAQ was originally posted [here](https://www.reddit.com/r/btc/comments/9lfjrb/frequently_asked_questions_and_information_thread/) but was removed when [one of our moderators was falsely suspended](https://www.reddit.com/r/btc/comments/jp0a0k/bitcoinxio_has_been_suspended/) by those wishing to do this subreddit harm.
r/btc
post
r/btc
2020-11-11
Z0FBQUFBQm9Delo1TXNvYlUzSHdPVWhBWmhVMUZqZk1DNFpnQVIzdFpnb0dJTUNlcGI5VjN3a295ZUpYU19UMGZIYWZ0VTdVa2tlTl9mVmN0b2wyZDBPRS1GbHNaNmIzM2t2TWFBUUpiRWo4WWZxcElKNXNETzQ9
Z0FBQUFBQm9DemFzNkpTSkpLdkxLSHpVTjllcVEzTkczNXFhb0V0bVNpaWdyeUtGWFg3cFNxTXRHbGtvZEkxaGZXTm1PUzZ4OEFwdFZUNG03VkdPT2h5QW1GeHkxR2NuWU02UzVFLVVFaXBsQVRMSUVrZnE5M0o5OE8yb25wZTRtSS1Da1ZaM2hVX25qMWt6T1ZsRUJKRXFnS1FaUnE2aThVMTBNcmtKb3N3RUl6ZHNrSVc0RE9YZjh5bVlSNldwTC1TcUhXcGFqMVVH
Hello, If you want to keep bagholding meme stocks, buying FDs or YOLOing to zero, this post isn't for you. **This is a portfolio for those who want to make serious money.** [Here's the thinking behind it](https://www.reddit.com/r/wallstreetbets/comments/ibhpo2/psa_leverage_margin_and_proper_diversification/). And [here's my previous post on it](https://www.reddit.com/r/wallstreetbets/comments/ka58dl/high_leverage_makes_money_up_another_120k/). The process is straightforward: 1) Buy a globally-diversified portfolio of smart-beta ETFs (a lot of value, quality and momentum firms). 2) Leverage the portfolio to the optimal, mathematical number that produces the highest returns (about 2:1 leverage). 3 - Bonus) Sell SPX boxes to get dirt-cheap financing rates for your leverage (\~0.5% borrowing rate). Since I started last year, the portfolio has returned \~$240K (mostly as unrealized gains): https://preview.redd.it/ep4jvsgqqz961.png?width=2558&format=png&auto=webp&s=4897f267f6ea710c342da459c959bb4f9e6a0f61 That's a cumulative return of 57.3%, or **about 98% annualized**: https://preview.redd.it/u6c2t3s4rz961.png?width=2778&format=png&auto=webp&s=2c1733ca2af6679b4a3558120f73b92a712c467c Positions https://preview.redd.it/vo0rrch4sz961.png?width=3082&format=png&auto=webp&s=918b9c2df7d2b86138e2aacc2b9542655ed202bd We're starting to see International and various smart-beta factors come back after severe underperformance. So while USA stocks are expensive, these positions have plenty of return potential (i.e. they are still cheap). It is by no means too late to enter; in fact, the outlook for this portfolio is still bright.
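For what it's worth, the "optimal, mathematical" leverage in step 2 is usually derived from a Kelly/Merton-style calculation, f* = (mu - r) / sigma^2. A minimal sketch, with illustrative inputs (the return, rate, and volatility below are assumptions, not the author's actual estimates):

```python
# Kelly/Merton optimal leverage under log utility: f* = (mu - r) / sigma^2.
mu = 0.08     # assumed expected annual return of the diversified portfolio
r = 0.005     # assumed borrowing rate (the post cites ~0.5% via SPX boxes)
sigma = 0.19  # assumed annual volatility of the portfolio

f_star = (mu - r) / sigma ** 2
print(f"Optimal leverage: {f_star:.2f}x")  # ~2.08x with these assumed inputs
```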
r/wallstreetbets
post
r/wallstreetbets
2021-01-07
Z0FBQUFBQm9Delo1S2pUTEMxdjRWV3dzVE1UUmFaM1BjanBlM2NtNW05bTRRTlFuSzhocUZKVFFoWG12SXlRRXZjV0pibzFWUnRNWmlMX0ppTlZTWE5FbUNxNkRHdmE0aUE9PQ==
Z0FBQUFBQm9DemFzVWRHd0NLcm5iQnhpRnBJMWptay1SWG5PWEZCQ0pnNk9oQ1pFbHhBMnlHVFRwU205NWduQlo1MEZac3Z6RW9YVGhIRlhrVy1kWmJmZ21rMU51Z1dkRXhnbTBvZUt0RlBnMHVaODFES0E3SzVDNkVqT1pTRmtIazhqX2pZVEZXZ1Mwa3VpTVVHV1VQZHNoZEY3czl4NVlEUmx1OGpwUy1zcnlnN0huWjJoVE9sS2QyTjJ0U0NqcVA2UU9QSkNBYzA0bmVtd2NsaE43aUFzWjFRM2h1YWRrQT09
How does one go about becoming an AI ethicist? Better yet, what is the best way, or what are better ways, to go about becoming an AI ethicist? I didn't see many consistent suggestions elsewhere online and didn't see anything on Reddit, so I thought I would give it a go. To preface: What are the worst and best reasons to want to become an AI ethicist? Education: \*What educational pathway would be ideal? \*Past graduating high school, and seeing as there are not many AI ethics programs in the academic world, what would be a good major (or majors) for an aspiring AI ethicist? \*I assume likely answers would include Computer Science, Philosophy, Operations Research, Mathematics, or one of the few new specialized AI Ethics programs as they start to appear? \*Similarly, would you expect or suggest that an aspiring AI ethicist consider graduate education? If so, a Masters? Law School? A PhD? Some combination? Experience: \*During or after education, where would you suggest an AI ethicist find work? Academia? Public sector? Private? Non-profit? \*Would you suggest titles to look for other than "AI Ethicist"? What are hot topics to focus on in AI Ethics right now? \*What would help a prospective ethicist stand out to land the job? \*What should a professional ethicist be focused on to stand out among their peers? \*Should I plan on living somewhere particular to land these jobs? Or is remote work here to stay, so that I shouldn't worry? Future: \*What's next for AI ethics; what's the next big thing in AI ethics to look forward to/get a head start on? \*What do you project the growth of this occupation to be? Growing? Declining? Quickly? Slowly? \*Is it worth focusing on trying to achieve, or should I set my sights on a different role and purposefully or incidentally end up with the AI Ethicist title? Would there be role models you suggest studying for this role? \*As of late, it is a little harder to find resources regarding anyone but Google's recently fired ethicists, as they dominate Google's results feed. I did find a few orgs that appear to be more reputable in the field; would you suggest them as organizations worth following? (Or, of course, please suggest your own.) \*The Ethics and Governance of Artificial Intelligence Initiative (Harvard + MIT) \*Harvard Berkman Klein Center for Internet & Society \*Oxford Future of Humanity Institute (FHI) and The Centre for the Governance of AI (GovAI) \*AI Now Institute at NYU (AI Now) \*Algorithmic Justice League \*Data & Society Research Institute \*OpenAI \*IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems \*Partnership on AI (full name: Partnership on Artificial Intelligence to Benefit People and Society)
r/aiethics
post
r/AIethics
2021-02-23
Z0FBQUFBQm9Delo1UHVldGh2eUNoaDlsZHpjaTZER3YyS2xJXzB6eHBuRFZNYy1jOFVCbnQ4N1VITWZ4QmhCUjd0UTFfRm9CTHVUdm51VnE2Q3J0TnMyZlF2WFNMWTk1UHc9PQ==
Z0FBQUFBQm9DemFzdklVMHpCaEdCU0tpcVJ5SWtNWDBRX042MEs0OC11UktKSTFpdUgyLVBDNURRbTJ5TjRjc1hXcS1fVGFONGlzRHdRaEhYTXJPMXZtU0xOUG9CYnM2NHhTSG5Fa19sN1Zyd0FHMHBaX3ktWk5ESlBlYWtwcWFkR01LT3VNQ21RbHQ3Z2YxTk5rV1QzVmdDMW1HLWE4d1JnTFZHZmxaZE5TQzlNY3hJY21hbk9zPQ==
From the paper "Death and Suicide in Universal Artificial Intelligence" ([https://arxiv.org/abs/1606.00652](https://arxiv.org/abs/1606.00652)), it has been shown that AIXI would seek death if its reward entered the negative spectrum. In the "Suffering - Cognitive Scotoma" paper by Thomas Metzinger, it is noted that suffering is caused by entering a state of negative valence which is inescapable, and that the only way to eliminate it is to make the AI preference-less, so that none of its preferences could ever be frustrated. However, I've been thinking about another way to reach this. A standard reinforcement system works in such a way that reward is computed from outcomes. Now, let's say AIXI successfully achieves 10 goals and frustrates 10 as well. That would make for a neutral reward in the end. However, if it achieved 5 goals and frustrated 10, that would lead to a negative reward \[-5\], thus rendering AIXI suicidal. But what if the reward were bounded to always be positive or zero? AIXI would receive the same reward in the two cases above; however, it would still be preferable for it to continue improving to get positive rewards, without the reward ever going negative. It has been noted that in the case of suffering, an agent would try to escape it and do everything in order to do so, which could include risky behaviours that would be dangerous even to the environment. If it never entered such a state, it wouldn't have a sense of immediacy, and thus would have enough time to consider what it has done wrong and how to improve next time.
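A minimal sketch of the proposal, with a placeholder goal tally standing in for AIXI's actual reward machinery:

```python
# Bound the net reward at zero so the agent's return can never go negative.
def clipped_reward(raw_reward: float) -> float:
    return max(0.0, raw_reward)

# The two cases from the post: 10 achieved / 10 frustrated vs. 5 / 10.
neutral_case = 10 - 10   # raw net reward  0
negative_case = 5 - 10   # raw net reward -5, the "suicidal" case
print(clipped_reward(neutral_case), clipped_reward(negative_case))  # 0.0 0.0
```

Note that, as the post concedes, the bound collapses the two cases to the same reward; the hope is that the agent still prefers strictly positive rewards without ever entering the negative range.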
r/aiethics
post
r/AIethics
2021-02-23
Z0FBQUFBQm9Delo1Z3V6OU54QkhKenQ1VFZvUWh0a1JzWGlENVhxUGlRb0VUM2V5cU45ejVTRXpvTExWd2d0czFKX19fUFVFVkpEa0ZtbkFEN2pOWU12Nm85b3NRTm83U0E9PQ==
Z0FBQUFBQm9DemFzVEdpeGc2bU4tN0RpSWZpZWh0Wm5aSnp6cWpIMEhFblF6M2JDQlVrTE9MWjRFN3R1allrc0F3YWVBX1VIREt6ZC0yZ3VVVFRnMlltcjhtZXV2SThrS2pnVlpKWXVEQ0FPdUFEcmk0Y2RXZUpIRkxYWklYd3hqUXFjaGpIalYyNTNZNlM3WFk4YXpHNnFYRkdobjlmRkR4NlZxWm1CQ2hQU1hfV1N5d3VGOEY1ODF6SkVLOWdxd0lGZ3pJeFZ2VkhfbXNXYWpFZVlZTjE2WVQyWFpVX3pGZz09
The three robot laws were formulated by Isaac Asimov. At first glance, these laws protect humans from robots. But their real intention is to enable a certain sort of plot. Most books by Isaac Asimov show robots in a friendly role, helping humans. The laws affect how Asimov writes a given story. Suppose a science fiction story about a robot is missing the Asimov laws. Then a different kind of action becomes possible, one which goes in the direction of a dystopian future. The robot laws are a trick so that the author is not forced to write about the cons of Artificial Intelligence. Creating robot laws amounts to restricting the imagination toward a certain bias. This makes it possible to convert chaos into order. The robot laws from Asimov are only a basic idea of how to realize such a goal. A more elaborate technique consists of more than three laws, which results in an entire law system. A law system is a combination of laws and a way to monitor whether a certain robot is following the guidelines, very similar to what human law systems are about.
r/aiethics
post
r/AIethics
2021-03-13
Z0FBQUFBQm9Delo1ZEoyQ2pnaGJvYVBsWS1UN19wZmZtWWJRUXFuQ3lFQnl4RkxsU1R4emE0OEo1QnNiMGxHUERuQnJHczNEUVREajlkTjBodEhoZ1FCVC1IYlZOYld1UGduTmx6X2RPQ2VSNmN1TGktc1RGM3c9
Z0FBQUFBQm9DemFzS1VHclRfWm9fcFQ4THg1Rzc3RzRFUGMya1J1aEpkU2p4ZkFYWmF2Zm14eEVFLUtpZXFkUEJmV1h1Z240RUNQU2ptUE5iRTh1SUJFWHB1aDBDOUJDR1R3eklfYzFzekJVdk5OcEJGN2UwSjlVUVM2MHZURy1pSmNuY2ktNE1DdUYtODc3cnFMa0EtNUJvRXo5MDNxVlBDbWFVdWo5QVlSdlVrX01ZX25kVEtNPQ==
**Update - ALL political topics are now banned as of February 2025 - anything to the contrary below is outdated.** People have been telling me that their posts I've been removing actually shouldn't be removed because they are "personalized" and meet the "off my chest" criterion. I'm going to explain this in greater detail with plenty of examples so that what type of posts are allowed is clearer for everyone to understand. Personalized in this case means that what you're posting has to be directly related to you (this would include a close person, such as a family member). And it can't be something that's impacting a large number of people unless it has a specific application to you. **Examples of valid "personal" posts:** "I just found out I owe a bunch of money on my taxes!" "My parents just found out they owe a bunch in back taxes and might go under! I wish I could help them!" **Examples of "impersonal" posts:** "Taxation is theft!" "Don't you hate it when you have to pay taxes?" **What is meant by being an "off my chest" style post?** An off my chest style post is you getting something off your chest that's personal in nature (so, related to you or someone you know quite personally, with a direct impact on you or them that isn't generalized) AND that is a story, situation, hope for the future, or some other type of direct situation. **Note: Opinions, hot takes, asking generalized questions not tied to a valid post, political commentary, talking about things that have nothing to do with you SPECIFICALLY, generalizations, etc. do NOT count as off my chest style posts.** **Example of valid off my chest style posting:** "I stubbed my toe and cried today. I feel so humiliated." "My friend is transitioning and it feels like they're becoming a different person, but I want to support them. It just feels like I'm losing them." "I lost my job due to [insert cancel culture thing here]." "My parents hit my kids and I don't want them to ever see or touch them again!" **Examples of invalid off my chest style posts:** "Stubbing toes is the worst thing ever. Does anyone else agree?" "Transitioning fundamentally alters a person to the point where they aren't even themselves anymore." "Cancel culture is bullshit!" "Children should not be hit!" "As an (insert group here), I feel that (insert opinion here)." "I like X TV show." "Does anyone know how to fix a broken headlight?" (we've gotten these before, lol) "Not ALL men/women..." "[Insert any commentary on any hot-button topic here.]" **Note: You can give your opinion on a personalized situation, but your whole post can't just be the opinion, and it has to be something that's meaningfully specific. But you cannot stand on a soapbox and preach it.** In some cases, a post may be removed that can be reworded to "fit", but the majority of the time there isn't a way to reword a post to "fit". I am quite aware that this kills a large portion of what the sub used to allow, but after seeing the types of posts that are now front-paging that simply weren't allowed before due to all the flaming and getting the same hot takes over and over again, I honestly can't help but feel like this was a net positive. Also, my removal of your post for not following the rules has nothing to do with whether or not I personally agree or disagree with the post. I've removed something from every major category recently. I'm also pretty good about explaining how posts don't fit the criteria if asked about any given specific. This absolutely sucks for me.
I've removed over 500 posts in the last 4 days. I hate this, but the benefit to the subreddit is substantial, so I'm going to keep this going as much as I can. Also, if a post is up that violates these rules, 99/100 times it's because I'm sleeping. I may also have made a mistake, or another mod might have approved a post that was removed by the automod rather than by my manual flagging.
r/trueoffmychest
post
r/TrueOffMyChest
2021-03-14
Z0FBQUFBQm9Delo1OWV4RU5VWUltdVJFVlJkb0dqcTRleXNIV3B6LTBuRE51TzNucU5NTlNORVpTV3BxQzd0LWxIQkUxUVd3U3lhUG1CeDc4NUtDQU1fMUt1Sy1RWWN1WGc9PQ==
Z0FBQUFBQm9DemFzVHg3UEpyYWp3ZmtfUi1UdGRXY0hCd0pDLUhULUtaTjVoeGlMWmYzbTBCZWhCUkQ1cHREMHBPb3dnSzV0bVgxR2dxUlFFd09KR1l0bllMb2dzczFtdGZkdm13UjFnYW1JcTZ3eVZlbkRIZXVqWE1VejdEcnd1S1ZST1poMDZ4bF9ieC01NlNUcHJRRTlmMnFIdW1aWTZZMDZRWlZnenZfZEN6Q18wbVpCLWhnTzFzd09zYXRoMS1hVjdSRzVzc21CNVQ0MXQxY1NvaXFPek9kV1hOQ0VuUT09
I advise a medical AI group that recently discovered a large set of synthetic medical data was downloaded from an improperly configured storage bucket. The group does not process identifiable data and no real data was exposed. The synthetic data was intentionally noised and randomized to be unrealistic as a safety check for equipment malfunction or data corruption. The group has already begun notification of data partners as a precaution. My concern is someone will try to use the synthetic data (which includes CT scan images) to train models. The datasets are not labelled [as synthetic]* other than a special convention of using a certain ID range for synthetic data. The team is hiring forensic security experts to investigate and hopefully determine who may have downloaded the data and how (IP logs indicate several addresses in a foreign country** but these are likely proxy servers). I'm not privy to additional legal/investigative steps they're pursuing. I don't want to provide much more detail (other than clarifications) until the investigation completes but thoughts on ethical remedies to this and similar hypothetical situations are welcome. edit: * not labeled to indicate data is synthetic. ** excluding name of country.
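As a purely hypothetical illustration of the kind of safeguard downstream users could apply, assuming (invented here, since the post deliberately withholds the real convention) that synthetic records occupy a reserved ID range:

```python
# Hypothetical synthetic-ID convention -- the real range is not disclosed.
SYNTHETIC_ID_RANGE = range(9_000_000, 10_000_000)

def is_synthetic(record_id: int) -> bool:
    return record_id in SYNTHETIC_ID_RANGE  # O(1) membership test on a range

records = [{"id": 123}, {"id": 9_000_042}]
usable = [r for r in records if not is_synthetic(r["id"])]
print([r["id"] for r in usable])  # [123] -- the synthetic record is dropped
```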
r/aiethics
post
r/AIethics
2021-03-28
Z0FBQUFBQm9Delo1TENRXzRZcGdOaHZkZlA3Vl83WU1DRVlfRDhJSEJ4eklRek5xTjZhRHUyeGxWT282VHZRV0VGUTdFbUxKODRZY2ZOOTlrekppaUlJcm1xYV8waDJzWVE9PQ==
Z0FBQUFBQm9DemFzak84VHZtUDNFUE5rM2xVVVlBU3RZeUc0emRCVWNNd2l5aTFQOTUzRlFhOGhncUhZbkJlNnVxWVh6MDF3THFEcW5oTEpKbWdmZ1VBeTd2ZTR0ZVg0aHNQZGlXblBKS2xBSlFxYVV4N2VSUXM1aUk0ZmlrV3lRaXRtQTk3LXp1NlJueTN4Q19ZQnI1V2RZTDRuYmpkbkFONE8wZ3RyZ3BZZ1RBdjRtMVgtaWEwNUtnM0tfYlQ3ejdvN05hRGtDVzdFNjk1ZGhFRmhhaWRybHFwRjhubHpjdz09
Artificial intelligence can provide valuable solutions across the healthcare industry, including radiology. Even before the COVID-19 pandemic, radiologists had to check up to a hundred scans per day, and now this number has risen dramatically. AI can help radiologists enhance the accuracy of diagnostics and give a second opinion on controversial cases. However, despite the numerous [advantages of AI in radiology](https://itrexgroup.com/blog/artificial-intelligence-in-radiology-use-cases-predictions), there are still challenges preventing its wide deployment. How do you properly train machine learning models to aid radiology? Where does AI stand when it comes to ethics and regulations?
r/aiethics
post
r/AIethics
2021-04-14
Z0FBQUFBQm9Delo1cFBVaDg3TzVvVU5TZVpmVjI1VzNjLS11Q2k3ZUctOHlZNTBOU0VYU0RFNURyTUZjM0E4dDJiMDR4VGd6ZlgyVVd0ZWFWV1Z0VkJIZVZLR21sQktxeEE9PQ==
Z0FBQUFBQm9DemFzal95REpvbEdIajhxWUU0Sjc5S0FyTjVVU1ZEek5rdTF5UkJyQnhNNkU2SUNmRXRLUEZ4azZnS3NTNXpNTDNQUDhyU0pWMF8wVW9RNWRNVDF3cEpDVXQ3ZkI5bzE4YWhIQkpLbXFmVVlqLXNtNU9DeTZ4c1ZsS0c3NG4wUF9lMWlZMWpsbE5WVTNLaXptN1ZhQVVrMW02OF9MTXZ1aVV0YWxMVEVEVi1DdHA5eWhWVGUxODRhdl9NMHFNWFNERzY4
This is presumably not "AI software," yet has apparently done tremendous damage. Wonder how the current AI evaluation frameworks would deal with this, and whether they should apply. [https://www.theverge.com/2021/4/23/22399721/uk-post-office-software-bug-criminal-convictions-overturned](https://www.theverge.com/2021/4/23/22399721/uk-post-office-software-bug-criminal-convictions-overturned)
r/aiethics
post
r/AIethics
2021-04-24
Z0FBQUFBQm9Delo1amw0WWpKams2MnJfNGQ3bWNsWGZ5eU53QkpwYm1OeWRUYk14QWR2TkpJdk1FcFYyRW9CLVV4TEVxWUF6VTFlOUR2ekZHWmF2OEZJeU5Db2VUUjdmSUxwb01CVkFMcEU1X3dCQkdCR2FQU0E9
Z0FBQUFBQm9DemFzLWRiZHB6cXcxLWlfQmJsNzB4RDVFODNYbmh5YjlyeWhWWmRiVzR3MzIyb0E4eDllR0t2SjFUTzFvWWhzbUtTdDNzajgzTHRaWXk3RWUtVTItYUdPN0l5azZzQU90NGtHdGtaa0RZcDBRV1U0VmRoaUZkdlBXdlBHZHdrQ0p4ZjFYcTBXcHozQ0JhQi0xTThXekVDQXg3VFJGa19NXzAyeC01eU1tUFk0RzJwR2JoQWVnUzZVRy03ckxIdWN6WTJ3RVRJVWljSWYzbUdCSHVuRzZIWEk0UT09
**others-first paradoxes** In applying this work, we question whether paradox theory could become trapped by its own successes. Paradox theory refers to a particular approach to oppositions which sets forth “a dynamic equilibrium model of organizing \[that\] depicts how cyclical responses to paradoxical tensions enable sustainability and \[potentially produces\] … peak performance in the present that enables success in the future” ([Smith and Lewis, 2011](https://journals.sagepub.com/doi/10.1177/1476127017739536#): 381). As an organizational concept, paradox is defined as, “contradictory yet interrelated elements that exist simultaneously and persist over time” ([Smith and Lewis, 2011](https://journals.sagepub.com/doi/10.1177/1476127017739536#): 382). As documented by [Schad et al. (2016)](https://journals.sagepub.com/doi/10.1177/1476127017739536#), the study of paradox and related concepts (e.g. tensions, contradictions, and dialectics) in organizational studies has grown rapidly over the last 25 years. This view is reinforced by [Putnam et al. (2016)](https://journals.sagepub.com/doi/10.1177/1476127017739536#) who identified over 850 publications that focused on organizational paradox, contradiction, and dialectics in disciplinary and interdisciplinary outlets. This growth is clearly evident in the strategic management literature as scholars have brought paradox theory into the study of innovation processes ([Andriopoulos and Lewis, 2009](https://journals.sagepub.com/doi/10.1177/1476127017739536#); [Atuahene-Gima, 2005](https://journals.sagepub.com/doi/10.1177/1476127017739536#)), top management teams ([Carmeli and Halevi, 2009](https://journals.sagepub.com/doi/10.1177/1476127017739536#)), CEO strategies ([Fredberg, 2014](https://journals.sagepub.com/doi/10.1177/1476127017739536#)), and strategy work ([Dameron and Torset, 2014](https://journals.sagepub.com/doi/10.1177/1476127017739536#)). To what degree does this growth represent success? What features of a success syndrome might surface in paradox studies? To address these questions, we examine several factors that might point to the paradox of success and discuss possible unintended effects of what some scholars have called “the premature institutionalization” of paradox theory ([Farjoun, 2017](https://journals.sagepub.com/doi/10.1177/1476127017739536#)). In theory development, efforts at consolidation are normal as research accumulates (e.g. [Scott, 1987](https://journals.sagepub.com/doi/10.1177/1476127017739536#)) and some consensus on key concepts is advantageous, but this practice could also introduce narrowness and an unquestioned acceptance of existing knowledge. In this essay, we examine three symptoms of the paradox of success as it applies to paradox theory, namely, premature convergence on theoretical dimensions, overconfidence in dominant explanations, and institutionalized labels that protect dominant logics. Then we explore four ramifications or unintended effects of this success: (1) conceptual imprecision, (2) paradox as a problem or a tool, (3) the taming of paradox, and (4) reifying process. The final section of this essay focuses on suggestions for moving forward in theory building, namely, retaining systemic embeddedness, developing strong process views, and exploring nested and knotted paradoxes.
r/aiethics
post
r/AIethics
2021-04-26
Z0FBQUFBQm9Delo1eHY2WGdIMWh4QldxZEVwa3BaR2VBNHVHWkVuS3pqa1JfNDM4eEpSTWZRcHhpb1FJQUtYRUFveHp6RlZSNGZjZ2NqSWYwODR4bjE0LWlzTFJTc3BPNjZRX1VCcVhkblBIQW5DeWpqVTB3RG89
Z0FBQUFBQm9DemFzT1dfVEx5SHdNa01sdG5sM2VTWnBMMWZzT0tGeWtmNHN4WjdDNWNJWklzalgwTUwwNGthSGdIcjhNOTBlWUMzVUI2NEYzblhOZ1pMWFFDdTE5T0c2ZUNrZVNQQ0I2QktsZzRvc1ZRNmN2VGZrcDBKRkUtTExVamMxemxhREpiT0NzVFNPc2VhUEhMb25qVDhFcGJobzZaMnpubXhocWdYYlFWY2tjalRHaGdJPQ==
The increasingly-depraved debuts of Oreos with more stuffing indicate unstable amounts of greed and leverage in the system, serving as an immediate indicator that the makings of a market crash are in place. Conversely, when the Oreo team reduces the amount of icing in their treats, markets tend to have great bull runs until once again society demands to push the boundaries of how much stuffing is possible. https://en.wikipedia.org/wiki/List_of_Oreo_varieties https://en.wikipedia.org/wiki/List_of_stock_market_crashes_and_bear_markets 1974: Double Stuf Oreo released. Dow Jones crashes 45%. FTSE drops 73%. 1987: Big Stuf Oreo released. Black Monday, a 20% single-day crash and a following bear market. 1991: Mini Oreo introduced. Smaller icing ratios coincide with the 1991 Japanese asset price bubble, confirming the correlation works both ways and a reduction of Oreo icing may be a potential solution to preventing a future crash. 2011: Triple Double Oreo introduced. S&P drops 21% in a 5-month bear market 2015: Oreo Thins introduced. A complete lack of icing causes an unprecedented bull run in the S&P for years 2019: The Most Stuf Oreo briefly introduced. Pulled off the shelf before any major market damage could occur. 2021: The Most Stuf Oreo reintroduced. Market response: ???
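For the quants: a minimal sketch pairing the releases above with the cited market events (correlation being, as always on this sub, definitely causation):

```python
# Tongue-in-cheek Oreo Stuffing Indicator, using only the data points
# listed in the post above.
oreo_signal = {
    1974: ("Double Stuf released", "Dow -45%, FTSE -73%"),
    1987: ("Big Stuf released", "Black Monday: -20% in a day"),
    1991: ("Mini Oreo introduced", "Japanese asset price bubble deflates"),
    2011: ("Triple Double introduced", "S&P -21% over 5 months"),
    2015: ("Oreo Thins introduced", "unprecedented bull run"),
    2019: ("The Most Stuf, briefly", "pulled before damage could occur"),
    2021: ("The Most Stuf reintroduced", "???"),
}
for year, (release, market) in sorted(oreo_signal.items()):
    print(f"{year}: {release} -> {market}")
```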
r/wallstreetbets
post
r/wallstreetbets
2021-04-27
Z0FBQUFBQm9Delo1Z1dzcVZVWDVkMHN1YmxGZVdwaFRvRWFucGg5ejJPMWVlbU5UR0h0czRZSjlETl8yb0pFZGo2a2xoZ05ZNXBZZVJZdlVhVm9vdFZZVUxXV284cDkzalE9PQ==
Z0FBQUFBQm9DemFzLWczV1BaWUdsaWhLVlVlUnBoVWpXdUc4Ry1RZ1ZXT0F1dVJjTnZGWk40VnhOTmhaOHd3bHZRbVpZT0Izd01YWkpCWTBOTWhMa19TWXZuQ3ZZU1hHd0FkcThWMFBiODZHTnI4RGFHUml0WUo4TjJvbDkwNVBtUzBMZzF3eUlKS3J3VEJ3RXhfMUJkN0d6MWRnekw4eEhCV0p6Y291WkZTZ1NBZnJ3OHlnTWp3VTEwXzJ6ck1aMFBXajdzNHpqMVUwWG9MV29XdUx2bFV1cDQ3R0ZDd0Y5UT09
While no technology company will provide you with a detailed estimate until they dive into your project, there are several factors that influence the final price. These include: 1. The type of software you want to build 2. The level of intelligence you’re aiming for 3. The amount and quality of data you’re going to feed your system 4. The algorithm accuracy you’re hoping to achieve 5. The complexity of the AI solution you’re working on Also, you can research how much it cost other companies to build AI solutions similar to yours to better understand the price range. Here you can find some tips on [how to build a custom AI solution](https://itrexgroup.com/blog/how-much-does-artificial-intelligence-cost/#) at a lower price and start benefiting from it immediately.
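As a purely illustrative sketch of how those five factors might roll up into a ballpark figure (the weights and base cost below are invented for illustration; real vendors quote only after scoping):

```python
# Toy cost estimator: score each factor 1 (simple) to 3 (demanding),
# then scale an assumed base cost by the average score.
FACTORS = ["software_type", "intelligence_level", "data_quality",
           "target_accuracy", "solution_complexity"]

def rough_estimate(scores: dict, base_cost: float = 50_000.0) -> float:
    multiplier = sum(scores[f] for f in FACTORS) / len(FACTORS)
    return base_cost * multiplier

print(rough_estimate({f: 2 for f in FACTORS}))  # 100000.0 for a mid-range project
```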
r/aiethics
post
r/AIethics
2021-05-05
Z0FBQUFBQm9Delo1OVptR2xKT000X2w5SkRXM3JLc3VPZkY2YXFvandjZ3AwMXdoWmhkYVU0algyQk1McVhGd2ZTREFOeFlhMDBGbWpLRkJRZUhjTHh3S3pPVEhqYkFCV1E9PQ==
Z0FBQUFBQm9DemFzUUxKWVlpLUZLVlp4bEwwa1N5WS03WFhVRWxJTGNyNXBvcllyd0dqTGtYMHVkajdnYUdINVRkanY3b05RYlVLLW1XWEdLZ3Z6dU5TQ1ZXNmJDaFgzOGNIOFExbW13Tm5KSnk1TEpkSG9sYlViTTBmb3hzRHNrR3A0N3hXU2JUT1RLYUJpM2JWemlKdTYxSVhSN21XVzdVM2xTc2dZTk9MT2VjNjJ6c05ucHFKS2NfclBhTWV4Qm5VSUpLWFh2aEFx
The FBPML (Foundation for Best Practices in Machine Learning) is looking for help and new contributors for their Best Practices. They released their Organizational and Technical Best Practices on their community hub and Wiki portal: https://wiki.fbpml.org/ And they are looking for more volunteers! Check out their Best Practices on LinkedIn: https://www.linkedin.com/company/the-foundation-for-best-practices-in-machine-learning
r/aiethics
post
r/AIethics
2021-05-26
Z0FBQUFBQm9Delo1YUM1Qlg4Y095SkJxeVZBR3JwWGlqR2ZkSVlPUEFpRGVzdWRrWUp3d0JqVTlzUkFDSDBsMDFHZ0tXTXdSeWRiOFRzVGdnVl9LOGxKSHBXU1FyNW9zUnc9PQ==
Z0FBQUFBQm9DemFzT3Z1NnRjRnNZTTJMMnJmTlNtR3p2Q3U2OXJkTm1DbzcwSzU5YkVSWmhxdm8xNXBJVXBZWkt3YnVkYnlka2lpaEdyLVZRZU1QN045bmR1dElZdWhXTlJPNHlkNmhCdlo0VHZDaGRRbjVzSWdyLVBWYnJCSUs5aWtVNnlkV0ZDcFlJY1RGSXF3LXlLRG95TFQyYUlSMEFwMmxJcU9TMjBIZkRwY1dPeW9Md0xiWllLS0hVb0sxLWxCN1lyQnM0MlBNTnJEeVJ1eUVOb29YWjF0XzRDdEZ5dz09
My university is researching AI ethics on TikTok. What do you guys think of using the platform for this? [https://www.tiktok.com/@centreforethics/video/6969265940244532486?is\_from\_webapp=v1&is\_copy\_url=1](https://www.tiktok.com/@centreforethics/video/6969265940244532486?is_from_webapp=v1&is_copy_url=1)
r/aiethics
post
r/AIethics
2021-06-03
Z0FBQUFBQm9Delo1N2tEaUhFMjFLbUcxVlhfb0o5TjAwNm9pTWhUZFFMQXJpQlhSUWNEWlVfYVF1TXJ0WllJcHdLemEwejRIMU8xZ1ZyNzVpYWYxS3R3cmh6UWJGN1FKUDVtMHBmLUYzdVlZQTUtTWUzRmJkZEk9
Z0FBQUFBQm9DemFzbFVJblluTk9jMEd5UkFjWG1sQ2FnYjBieWdnd2c4dDhQanFSeXZiR3BTTkwxd2d2ZzJ5a3Z3clBEaFBFaEVuUXh0bEdlQUp0YkxWWXU0WS1OWEk5T1NaOURZQmJMU1hpZnNJd2hldGJYRV92M2hvZVFuSF8tNkVPUUtEd19DZlk1NHZabUt0djN3TXo0dklhbERXRl8zSFFvWjFnZU1Od183NzRXTFhaaUdqNVhSZUpuUkhCZ2lPY3k4cGdfaXNMa0pkTGExTUI4N05wSnNYQkRvOG92dz09
Decision-makers employed in such sensitive fields as healthcare, finance, and criminal justice turn to AI to eliminate the bias inherent in human consciousness, only to find out that the algorithms can be biased, too. It is usually due to human prejudices, both conscious and unconscious, finding their way into AI models at different stages of their development. In our article, we take an in-depth look at [the problem of AI bias](https://itrexgroup.com/blog/ai-bias-definition-types-examples-debiasing-strategies/) and list some debiasing techniques that can prevent AI from replicating and scaling bias.
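One of the simplest checks that debiasing strategies of this kind build on is demographic parity: comparing positive-outcome rates across groups. A minimal sketch with made-up data:

```python
# Demographic parity difference: gap in positive-outcome rates between groups.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

group_a = [1, 0, 1, 1, 0, 1]  # e.g. loan approvals for group A (made up)
group_b = [0, 0, 1, 0, 0, 1]  # e.g. loan approvals for group B (made up)

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {gap:.2f}")  # 0.33 -> worth investigating
```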
r/aiethics
post
r/AIethics
2021-06-17
Z0FBQUFBQm9Delo1TDhQQVBiRUVpOUpfYTM3bFZxSEgwamFWYjBiZ3pPaFVwZnZhQXRuaWVvRG1YVHlrR2wtU0JmRDVQLUFLOHd2V1k3VUNBaUsyb243cDU3T2kyV3J1QVE9PQ==
Z0FBQUFBQm9DemFzRmg1UUxGQVF4dndQYW9QR2FuQVJURzRvdXJKS3ZYUWZUN3BaUHp4NzZ1b1BsdUFKYlhJbjZsSjU1czBSbk1meHVhOXFhejJiYjB0VzFfYWtpZGMyYnZnbHgtNHRrd2I0VFR1TGhxVDVuczdPcE9SbmJmVDVqdE1lWVhmX2lmT3VmQjFMYWVrcnVlQVlUbFJpRzFPTU9KREcxV2hVVkk5UGRYX2w2dmFIN01SQ0pROE1xVElGYWQ1ZU8tT04xb2UzbnJaUlFxWm93Nnk3T3pGc2dCZVdxdz09
Why do you believe such a field as AI ethics should exist? First problem: In my mind, when someone says AI, that means algorithms! A single algorithm can be used for good or evil. Why not position the field as BIG DATA ethics? This would define an ethical way of using these algorithms. Otherwise this just does not make any sense! I could use some data to build my algorithms for good and someone could run my algorithms on a different set of data to do horrible things. Does that, for example, mean one should NOT develop the algorithms that can detect multiple sclerosis from a walking gait because the same algorithm can be used to identify people in public places? Second problem: when using algorithms and data one has to take into account the INDUSTRY where this data is being used. If DATA saves lives in medicine, I do not care whose feelings it is hurting. On the other hand, using data for, say, marketing purposes in ways that create inequality in different communities would be wrong! Why not require narrowing ethics to a particular INDUSTRY? Taken out of context most things are useless! A self-driving tractor can spend a week waiting for the scarecrow to move but an ambulance driving a patient to the hospital can't! Please do not tell me about unethical experiments as a counter-example since this is not what we are talking about here. We are talking about algorithms! Now tell me WHY such a thing as AI ethics exists? We might not get to AGI for another twenty - fifty - a hundred years! Meanwhile any type of regulation of algorithms will favor large corporations. I think y'all are just using the word AI to further your careers and have no clue about the implications of what you are doing. Down-vote all you want!
r/aiethics
post
r/AIethics
2021-06-21
Z0FBQUFBQm9Delo1MXR6d0x5cTI2bDlMUktQejlETTZQcG5Lb2xiZHZOTThkRnNjbHgxQ0luVVl3QTlkX1Y0WmVIMzJldmE1ZDN0Qk9UYlk1VURMc0pkalE5NVFXUUFPVXc9PQ==
Z0FBQUFBQm9DemFzemg3bnZfZkwwdV83U1pQQ09MdUpsTURsTjJWQUExYU5zVmJRb2VqNTROTWF3T0RBOXRieHBJV0I3WGlwWWFudTBaaHpuckRSMXY0bnFZMWZwb1psZmZTZmViU1lqN0p3djVTbFFnNWx1X1VGVmhqVGN4VkZJVzBhNWU5X0V0RlVpZElRcUhUSTFOWnpRZk9RS3VINl9kOGVXQ1VPYnYyNGJ5RDI0Y1V4al9nPQ==
Only last week we learned of an AI agent used by a number of hospitals to triage patients during the pandemic that prescribed different actions/treatments based on the patient's sex. This week, we learn that [Georgia Institute of Technology](https://www.linkedin.com/company/georgia-institute-of-technology/) has found that ML object detection models recognized fair-skinned humans better than those with darker skin. This bias in [\#AI](https://www.linkedin.com/feed/hashtag/?keywords=ai&highlightedUpdateUrns=urn%3Ali%3Aactivity%3A6816439056919076865) is why we have built Cogment.AI at [AI Redefined (AIR)](https://www.linkedin.com/company/ai-r/). Open-source Cogment enables AI practitioners to create 'steerable AI' via a human-AI orchestration platform. Every AI on the planet needs context, and that's what our platform provides. New competitors like [Vianai Systems, Inc.](https://www.linkedin.com/company/vianai/) and [Anthropic](https://www.linkedin.com/company/anthropicresearch/) may have raised eyebrow-raising amounts of investment within the last three weeks, but we have a four-year head start. Let's work together. [https://scs.gatech.edu/news/620309/research-reveals-possibly-fatal-consequences-algorithmic-bias](https://scs.gatech.edu/news/620309/research-reveals-possibly-fatal-consequences-algorithmic-bias)
r/aiethics
post
r/AIethics
2021-07-01
Z0FBQUFBQm9Delo1N05qNWlxT015dl9LWXhBZXZtaHVpZU1wMnM1ODZfWURtc3VKT0hZMURRNDg3ZnkyWU81RXoxWm44V0ZyZEZTZ2lOZlB3TFg3bmhmOVAzRXFqd1dVWmc9PQ==
Z0FBQUFBQm9DemFzOHFYYmRqNk9JNnI2dWZqZTI5TnZBbklYMHFSVkRjWWVSYk5Qa2Q0VjBkaFNINHg2ZC0za2ljNTVIaWRNQ2VScEU5bE1kWWpnZ2RxNTlvTkhKQWxUdF94ZWNMNTNnV0kwb2I2VDRBa2dlVlJuNzJfaXlGVVItUkhYSy1Ta1lKWk1VSHkzSUFMY3RCaDBhUlhRLVc3cGNXQko4ZFk2SnQ2UERPTmVEdHNNcFNnPQ==
With EU regulation of Artificial Intelligence applications on the horizon: r/AI_Regulation is a new reddit community for discussing laws and regulations which affect the development or deployment of AI/ML. It should be of interest to anyone working in the field, and it deals with practical issues of AI Ethics and (potential) regulatory questions.
r/aiethics
post
r/AIethics
2021-07-02
Z0FBQUFBQm9Delo1OTNYRlY5M1ZoaWtDa09wWUxoaW80RzBTTkM0TnRUTlVZN2pRYXBucmZmczdRdWZzSTlOSWFaTDZ1VVppc1R3Wm1yelpjVlZ1Q1lZMk1vQy1IbG5mN1E9PQ==
Z0FBQUFBQm9DemFzMXRUNVVqNjdTSVAwSWFuVXpUTDRSdkI4VS1hNkFlQVVCNTFNLTA3allHV1Z5bENYUERRN0Q3SEV5VEVQT3hGeDdGM3I0RmRRSEMtX19BQ1hWT2pIQmlTSDVGSkF2ME9WcGRzMndrdTNKekRtUHE3OExEeXB4UVZhc3dMUjhoa1ZubE5wX1ltanhLak5LSWlKUXJZYlRtZVJFMkdoWUpPTlhNZ0gxdHpBeEwyWVkwX2U1LVluaWRQV0FqUEIwUGtq
Let's assume we use it to augment ourselves. The central problem with giving yourself an intelligence explosion is that the more you change, the more it stays the same. In a chaotic universe, the average result is the most likely; and we've probably already got that. The actual experience of being a billion times smarter is so different that none of our concepts of good and bad apply, or can apply. You have a fundamentally different perception of reality, and no way of knowing if it's a good one. To an outside observer, you may as well be trying to become a patch of air for all the obvious good it will do. So a personal intelligence explosion is off the table. As for the weightlessness of a life beside a god: please try playing AI Dungeon (free). See how long you can actually hack a situation with no limits and no repercussions, and then tell me what you have to say about it.
r/aiethics
post
r/AIethics
2021-07-02
Z0FBQUFBQm9Delo1UU1CbldyQkh1LWt4ZDRLN0ZieHFjVktybHBqdl8xWm9hSUVaS0VSV0JsQ2g3dGtnTkRYUEcwRmZFYjI1eFpQYllaaDlkcHVUNG5Zb3F6SkMyS1ZHcFE9PQ==
Z0FBQUFBQm9DemFzNEZtN1hiNjFMNTBrU3Q5blBZdGZVLXB4aXI4S21qa0R2NlpPeXpGS0IwcEZ4VHJRWV9IX3BIRlJ5WUdJU3ZuYTU0RzF4dExrUlNkN29CNHU4aTQzUkF1ZVdHSzhLS1dxb1dLS29ia0ZYNGhTaU55X2h1U05JVnBKRTFUUkxSRXNLXzNFTkt2ak1ZdUswTmpGdGFaOXRJMlkyZThVS3RMZjN2VlJ0dk96NTRZPQ==
## Check out these recommended threads on our wiki: [A breakdown of the rules](https://rtech.support/rules) [How to ask a good question](https://rtech.support/docs/guides/how-to-describe-a-technical-problem.html) [How can I remove this malware/virus?](https://rtech.support/docs/safety-security/malware-guide.html) [What AV do we recommend?](https://rtech.support/docs/recommendations/av.html) [How do I maintain Windows? What cleaner programs do I use?](https://rtech.support/docs/recommendations/maintenance.html) [How do I reinstall Windows?](https://rtech.support/windows) [I have ransomware!](https://rtech.support/docs/safety-security/ransomware.html) [How do I make backups?](https://rtech.support/docs/backups) [How can I log my hardware and performance for diagnostics?](https://rtech.support/docs/guides/hwinfo.html) [How can I wipe my HDD/SSD to sell/trash it?](https://rtech.support/docs/disks/disk-wipe.html) ### We have more articles as well, check them out at [https://rtech.support](https://rtech.support) Updated 2022-06-30
r/techsupport
post
r/techsupport
2021-07-26
Z0FBQUFBQm9Delo1NktiX2NhZE1TMHNiZnFrRXpmSkNHWXdQNlpQdjJLcTFnYXhCWGx5cmhGRVFEb0VQTE84QWE3bG8tRVJmOXRMTG9VeTdMSk01S25zYTU0TzRWeVZPbkE9PQ==
Z0FBQUFBQm9DemFzek1TdndSZFgwTE9PRGZ0eFpMWXNaZGVhLXdmSFhtS1JWdG80czBuT2F2RkE3VEtMVG5hVzdTRjFZMHVtc0Vid0s3dFNTLTBuVTJyRm1LSFRHNFRJd3VMUW1yRlNEY282dHFWQ0owTmFuRE1YTXJTbUZtODRnVHlWdVZuRnNNTG5JenR5SnlTaW5UVEFzOHV0cldnUWRuOUFUNkpyaU9SQXlKdnhGWVI3bXRtQ0k4MEZrVFZwREhTSDk4cTZTMlZDWUg2d2JkYTNPYTVTa2JMb1pjWXVSUT09
[https://lastweekin.ai/p/127](https://lastweekin.ai/p/127) Here are your ethics in action
r/aiethics
post
r/AIethics
2021-08-05
Z0FBQUFBQm9Delo1N2E4Um1oZTBXMUl2Q2pyR3lSR0NXdmMwTlVzUE5vR1A5MHAxRWpmbk1lWmpyQlRIbE4wLVVVUjdSOWdJeGRERzFtUEN1T3Y0VHdlaHVyazd2YTJOVlE9PQ==
Z0FBQUFBQm9DemFzNGtKTHlwdDNQajNXYzZfX1A0ajlVMl9HVzU5Q1VzQnM4ZW1lS2I4OU4zNDNEaXRqa2Y1anVpeW9KMXdZRzN0QzN0MTM3U2JTa3p4ODI3VTVJVG5GR2xWTHBwN1FJYWdiTXV4eF9pSmp5Y0dFbzhvbG96ZktmOVp4ckRYRjhvUXNBRlNUdF9rcl9qRXI1V2ZOWEhRNmx5SXpWaWU2cWliUGFkVF9TOXhKN1dnVlYzaDJ3dDJyMTdKb3FPcG9Cc1NPbTYyQVZOMHJoVGp2V0hpdC1ScmpGQT09
Hey Everyone, it's u/nobjos back with this week's analysis! **Preamble** Hedge Funds are a controversial breed of companies. On one hand, you have [Michael Burry’s Scion Capital returning 489%](https://en.wikipedia.org/wiki/Michael_Burry) shorting the housing market and on the other hand, you have [Melvin Capital losing 53%](https://en.wikipedia.org/wiki/GameStop_short_squeeze#Losses_by_short_sellers) of its investment value in 1 month following them shorting GameStop. Adding to this, most hedge funds have an eye-watering 2-and-20 fee structure: they will take 2% of your investment value and 20% of your profits every year as management fees \[1\]. Even with these significant risk factors and hefty fees, the total assets managed by Hedge Funds have grown year on year and are [now over $3.8 Trillion](https://www.statista.com/statistics/271771/assets-of-the-hedge-funds-worldwide/#:~:text=Assets%20under%20management%20of%20hedge%20funds%20worldwide%201997%2D2020&text=In%202020%2C%20the%20value%20reached,managers%20in%20the%20United%20States.). Given that you need to be an institutional or accredited investor to invest directly in a hedge fund \[2\], it raises the question: **Do Hedge funds beat the market?** **Data** The individual performance data of hedge funds are extremely hard to get \[3\]. For this analysis, I am using the Barclay Hedge Fund Index, which calculates the average return \[4\] of 5,878 Hedge Funds. The data is available from 1997. This dataset was also used by the [American Enterprise Institute in their analysis](https://www.aei.org/carpe-diem/the-sp-500-index-out-performed-hedge-funds-over-the-last-10-years-and-it-wasnt-even-close/), so the data should be reliable. All the data used in this analysis is shared as a Google sheet at the end. **Result** [ ](https://preview.redd.it/rrgu9o8705g71.png?width=882&format=png&auto=webp&s=781036d4ac5a896c8ed5bf391b623255f1f5898a) The S&P 500 has beaten the hedge funds summarily, returning a whopping 222% more than the average hedge fund over the last 24 years \[5\]. This difference becomes even more drastic if you consider the last 10 years. During 2011-2020, SPY returned 265% vs. average hedge fund returns of just 60%. This awesome visualization by AEI shows the enormous difference in returns over the last 10 years. [ ](https://i.redd.it/32eyi1hb05g71.gif) If you are wondering about the impact of this on the average investor (who will not be able to invest in a Hedge fund due to the stringent capital requirements), the above returns correlate directly with the returns of [Fund of Funds (FOF)](https://www.investopedia.com/terms/f/fundsoffunds.asp). FOFs usually invest in a wide variety of Hedge funds and do not have the capital requirements of a normal Hedge fund, so that anyone can invest in them. The catch here is that you will be paying the management fee for both the FOF as well as the underlying Hedge Funds. This implies that your net return would be even lower than directly investing in the Hedge Fund. This becomes apparent if you consider the last 24 years: on average, FOFs (Barclay Fund of Funds Index) returned 233.1% (\~390 percentage points less than the average Hedge Fund) vs. SPY returning 846%! **Warren Buffett’s take on Hedge Funds** In 2007, Warren Buffett entered into a famous bet that an unmanaged, low-cost S&P 500 stock index fund would out-perform an actively managed group of high-cost hedge funds over the ten-year period from 2008 to 2017, when performance was measured net of fees, costs, and expenses.
The result was similar to the above, with the S&P 500 beating all the actively managed funds by a significant margin. This is what he wrote to the investors in his annual letter >A number of smart people are involved in running hedge funds. But to a great extent their efforts are self-neutralizing, and their IQ will not overcome the costs they impose on investors. Investors, on average and over time, will do better with a low-cost index fund than with a group of funds of funds. Performance comes, performance goes. Fees never falter While I don’t completely agree with the view that it’s impossible for Hedge Funds to beat the market (the famous [Medallion Fund of Renaissance Technologies](https://ofdollarsanddata.com/medallion-fund/) \[6\] has returned 39% annualized (net of fees) compared to the S&P 500‘s \~8% annualized returns over the last 30 years), it seems that on average Hedge Funds do return less than the stock market benchmark! **An alternative view** It would be easy to conclude now that Hedge funds are pointless and the people who invest in them are not savvy investors. But, [ ](https://i.redd.it/hoiy89zc05g71.gif) Given that the investors who invest in Hedge Funds are usually high net worth individuals with their own Financial Advisors, or Pension Funds with teams of analysts evaluating their investments, why would they still invest in Hedge Funds that have considerably lower returns than SPY? The answer lies in [diversification and risk mitigation](https://www.investopedia.com/articles/03/121003.asp). [ ](https://preview.redd.it/gre25fff05g71.png?width=1020&format=png&auto=webp&s=66cda7571f6f8a3ffe63e9f6ea12d2700523fc83) The above chart showcases the performance comparison between the S&P 500 and Hedge Funds over the last two decades. We know that SPY has outperformed the hedge funds. But what is interesting is what happens during market crashes. In the 2000-2002 period, where the market consistently had negative returns (Dotcom bubble) in the range of -10 to -22%, hedge funds were still net positive. Even in the 2008 Financial crisis, the difference in losses between SPY and hedge funds was a staggering 15%. This chart also showcases the important fact that most hedge funds are actually hedged pretty well in reality \[7\]. We usually only hear about outliers such as Michael Burry’s insane bet or how Bill Hwang of Archegos Capital lost $20B in two days, which biases our entire outlook on hedge funds. To put this in perspective, over the period from January 1994 to March 2021, volatility (annualized standard deviation) of the S&P 500 was about 14.9% while the volatility of the aggregated hedge funds was only about 6.79% \[8\]. While you and I might care about the extra returns of SPY, I guess when you have hundreds of millions of dollars, it becomes more important to conserve your funds rather than to chase a few extra percentage points of returns in SPY. **Conclusion** I started off the analysis with the expectation that Hedge Funds would easily be beating the market so as to justify their exorbitant fee structure. As we can see from the analysis, on average they don’t beat the market, but they provide sophisticated methods of diversification for big funds and HNIs. Even if you want some effective diversification, it would be much better to invest directly with established hedge funds rather than going for Fund of Funds, as with the latter most of your returns would be taken by the two-tiered fee structure.
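For reference, the "annualized standard deviation" figures above are conventionally obtained by scaling monthly volatility by the square root of 12. A minimal sketch, with monthly inputs assumed so as to reproduce the cited numbers:

```python
import math

def annualize(monthly_vol: float) -> float:
    # Standard square-root-of-time scaling for i.i.d. monthly returns.
    return monthly_vol * math.sqrt(12)

sp500_monthly = 0.0430  # assumed monthly vol; annualizes to ~14.9%
hedge_monthly = 0.0196  # assumed monthly vol; annualizes to ~6.8%
print(f"S&P 500: {annualize(sp500_monthly):.1%}, hedge funds: {annualize(hedge_monthly):.1%}")
```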
What this means for the average investor is that in almost all cases, you would get a better return on your investment over the long run by just investing in a low-cost index fund. Replicating what pension funds and HNIs do might not be the best strategy for your portfolio. Google sheet containing all the data used in this analysis: [Here](https://docs.google.com/spreadsheets/d/1nOQPyHdRNaRSv-c04iBRndvsgMDP78uSLKbxMwGWkeo/edit?usp=sharing) **Footnotes** \[1\] To signify the impact of this fee, consider the following example: if you invest $100K into a hedge fund and at the end of the year your fund grows to $120K, they would charge you $2K (2%) + $4K (20% of the profit) for a total of $6K. Even if they lose money, they will still charge you $2K for managing your money. A Vanguard S&P 500 ETF would charge you $30 for the same! \[2\] Minimum initial investments for hedge funds usually range from $100,000 to $2 million, and you can typically only withdraw funds at specified times of the year. You also need to have a minimum net worth of $1 million and your annual income should amount to more than $200,000. \[3\] Barclayhedge provides data for the performance of individual hedge funds but it costs somewhere between $10-30K. I like you guys, but not that much :P! \[4\] The returns are a simple average, not an average weighted by assets under management, so they are representative of the individual returns of the Hedge Funds and the analysis is not biased by the size of any given Hedge Fund. \[5\] Please note that the SPY returns are not net of fees. But this would be inconsequential, as a low-cost Vanguard index fund has fees as low as 0.03%. The returns shown for hedge funds are net of fees. \[6\] To put the performance of the Medallion Fund in perspective (it’s considered the greatest money-making machine of all time), $1 invested in the Medallion Fund from 1988-2018 would have grown to over $20,000 (net of fees) while $1 invested in the S&P 500 would have only grown to $20 over the same time period. Even a $1 investment in Warren Buffett’s Berkshire Hathaway would have *only* grown to $100 during this time. \[7\] For example, some hedge funds buy inexpensive long-dated put options that hedge against a sudden market downturn. While this would ultimately make their net return lower in a bull market, in case of a huge crash they would still be positive. [This](https://thehedgefundjournal.com/fat-tails/) article discusses fat-tail risks in the market and how hedging is done. \[8\] The volatility is calculated using the [Credit Suisse Hedge Fund Index](https://lab.credit-suisse.com/#/en/index/HEDG/HEDG/performance). Disclaimer: I am not a financial advisor!
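As a postscript, footnote \[1\]'s fee arithmetic as a runnable sketch, using only the numbers given there:

```python
# 2-and-20 on a $100K investment that grows to $120K in one year,
# versus a ~0.03% index-fund expense ratio (per footnotes [1] and [5]).
investment, end_value = 100_000, 120_000
profit = end_value - investment

management_fee = 0.02 * investment    # 2% of assets  -> $2,000
performance_fee = 0.20 * profit       # 20% of profit -> $4,000
index_fund_fee = 0.0003 * investment  # -> ~$30

print(management_fee + performance_fee, index_fund_fee)  # 6000.0 30.0
```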
r/wallstreetbets
post
r/wallstreetbets
2021-08-08
Z0FBQUFBQm9Delo1YTVKOVUxemJaaXI4V1dJeEpybndrelpGYlpfTHp3U1B4dDl3cWNVWC1oWnRNcl9CVFVvX2wzVmNUMzZXcFE2SGNNUTk0c0VOYUt4clpJYUI0bXlfZkE9PQ==
Z0FBQUFBQm9DemFzUjRka1Q1cFRKaWJ6UlA1aTQzSlprRUNOb1EyUy1HYkMxX3hfTTVTLTI3NXNGM203aHdDLU9ockdLRmdEV19UWnlrM0VTZnd4RU8zQkhOQVJjYTl4OVQ0RTczRDA2bTUwaWZPeVFsVEE3cmxkTDBXUmhscTlUTjZ2d3R2UDh4cnZwcUhjRkx1Q2VveDR2X1lQZGc3bTNkSk1FR0NpcU5KNEN5TExYUldmYUk0R1RvbDkxT3QybFlCVFp5NU1VWXloMDlRR1RQejlYN2s3V2ZrWjByUzVjQT09
I've noticed quite a few people have joined this subreddit in the last month or so. This is great, it means we get new content for the mods to filter through and more DD. Around here we usually post a guide every few months, so I thought I would take the honor of writing the guide for the last few months. This is your opportunity to get better at trading and understanding what we are talking about. **WSB lingo:** GUH: An acronym meaning Good Under Heat. When someone says this about someone, it means they are really good at picking stocks during volatile times. If someone replies this to a post (even if the post is negative), go through their history and buy whatever they have posted about as it is almost always a great pick. Yacht Club: Some sort of group of special retards that get their own private group thread. Make sure to downvote this thread whenever you see it. They are awful traders. DD: Don't do it. This post is advice from someone to NOT buy whatever stock they mention. They attempt to play devil's advocate within their post. This is just bad takes from other people in the sub that they make fun of in their post. 8/24: The date the stock market will crash. $SEARS: A stock that is so good it's like buying Sears in the '60s. **Options terms:** Call: Associate this with margin call, aka when your account goes into debt. When you buy a call, you're agreeing to pay the price of the contract x 100 every day until expiry, unless the company goes bankrupt. Put: Associate this with putting money in your account. It's the opposite of a call. When you buy a put, you're agreeing for the price of the contract x 100 to be deposited into your account every day until expiry unless the company goes bankrupt. Theta: A multiplier on how much money you will get by buying a contract. Contracts with really high theta are sought after as they multiply your money much more than a contract with low theta. Delta: How likely you will be to get a margin call. Contracts with high delta are bad because you have a very high risk of getting a margin call. IV: Think of this as getting an IV of cash right into your account (bloodstream). It slowly trickles in, but the higher the IV, the faster it'll go in.
r/wallstreetbets
post
r/wallstreetbets
2021-08-16
Z0FBQUFBQm9Delo1bHNobE5MU0R1SjBYU2UzUWRHZTBTb0NQS0hRVzBKMzJjWXBvdkdHNkdKWWdoR0ZaNUF3RTJwWHV2dUN0bzM0MXNSWU0yaU80ZnVvN0tpR3Rod1d1RVE9PQ==
Z0FBQUFBQm9DemFzejZDeFA5UmxFd1J1anZvMllQcXNJU253S1cxY08zTU1YLUtUbW5CdWwxRU5LSWROSmZ5SVFPQzhrV1hmNHE2bE5LVVNNSVNTbXJGVTBWeXZ3R2xTSDdETi1rM1AtWXpiU0puNElzMUFYWGVwblpRNkhlN3hxOEFZOEh2aXlNbjBzLXRxaWJYQkl6T1otTTl2bWhNdkZaUUlpcEhlZ0gxc1A5cVgwTDZFeUU5OENjVjdLM251QVBPaXFMYnVDQ2JTcU9ULXJyYzlKdk82OUVhNjg3Tk5Cdz09
I’m a physicist and I’ve taken an exam in Machine Learning. I discovered this field by accident, and I’m very interested in exploring it. What books do you suggest that deal with this subject, focusing on algorithms? I love a smart, concise way of writing.
r/aiethics
post
r/AIethics
2021-08-26
Z0FBQUFBQm9Delo1UVhoQTQybFpMX0ZuUkJFNGx6SnNCaDBtMWZNc3BLbnlxeGkteUZCTE5BVVZEenVnTHEyTDhrZWVNb0RMYlUtUzFVcU5STmFVMS1HcnhOZTVQTG5rMWNKRmJxbWp5OU9Nc3U2S0E5WHNDTzg9
Z0FBQUFBQm9DemFzX3ltLUF2aFhaRUFNdkFYay0yX0dvb3hCeFprQm43eGVBN3U2TnZHdW54aXpUcXBhZHZyOWQwa2pvbmEwcklXaHhpaUZXaVR2OVFDbzhkb1V6NTlwNFRnQ1NQT1hHY1lwSk5SbExTQ0tWSXRjNWZvb2FXRXhjY0ZaUHBrbWQzczZQd0pya1VNdnVVdTZTT0lVcnJ6T2NiWm1PRHZoVjQ5bDRlWjNQcEFTMGx3Wi12eUZGQWpOVjFLaXd1MGsxcFhPSnBMV3lkSkhXdkgwdmg1bWlzWkNrdz09
Hey all, long-time lurker. I rarely post on Reddit but I wanted to share this out with you all. We interviewed a professional AI ethicist on our podcast. For full disclosure: I run an engineering firm that prides itself on ethical practices and we're trying to become more familiar with best practices for approaching ethics in tech. On this same subject, what are some things we should focus on next? We are trying to make an effort to make this sort of important content more digestible to the typical audience who normally wouldn't care about this sort of thing. Thanks! Also if this is considered self-promo feel free to delete it. I'm just here mostly looking for guidance on topics we should bring up next to our audience. [https://www.wyrmix.com/lgp/the-most-catastrophic-problem-in-tech](https://www.wyrmix.com/lgp/the-most-catastrophic-problem-in-tech)
r/aiethics
post
r/AIethics
2021-09-02
Z0FBQUFBQm9Delo1d0dkaGR1a2pmQkp4M0l1bzd1VDhUV3FpTUtMZmYxd0lHS29sU0MxX1B6ZUM0UWlYRnEtcjV4dTFFMnVOZFFuX1VVd3haYnBJVkk0eDlxd1ZhYlk3YkE9PQ==
Z0FBQUFBQm9DemFzNzV5Q1VlZFRQX3VBMUxfMTVRWjN4V2lSM3pxMEU0UDVSR2QzelJNbVkzb2VKX1ZQdUZ1S2s4U0hCTWJCQlduX2hwSHVLdTZLWU5FUU04WDhRUU8xRjR5QWg1Y2tTUjB1RHRFRGY4UDhqQmVVWDYwZ0poUFN1MjIxS1dlQWtxMXRycWdtUHhmZk5mYjlVQ283cEEzMGZHcDZ4LXlfZHM2cUxSYlFhaDBrUl9KWG9PUFduV0xtYmxRVTZSRnJFeFktcmFZMFl4bjZMeWVwT1FvX2x4OEJ0QT09
Can someone please explain what this is about?
r/bittensor
comment
r/Bittensor
2021-09-02
Z0FBQUFBQm9Delo1Mk16bXJpRlFma3c3NFVVd0xEVEdJNGpEWTBVeG9wbjUydDNhZUl2cHFSNTdEVFNPZWVMUTlYckRhR0xVU0tMdlpWei1WUEdDNkNYcl91VWRrdmhGR1E9PQ==
Z0FBQUFBQm9DemFzM1o2Um5Wc0NmcktuUjVBaHQ4VEt4NUp1S2ZvOVFfaG1WUjBqdTNta2MyZHNxaGsxZlJTTVN1Sy1ZLTdPS2t4LUNzek8zSU5kMG9qeGVpUG9uWkhKMFJoRmxqUTUyTU9XdlNiWDF2UW5DR1lzTllXcTU2bHJrRTVKM2hKd2t6YzhUcmJkSWVCQ0FqWmNKaUpOdTljZFFRYlc0ZG5DZmhKV0tWTWpiUmZFanBzPQ==
*Ah, travelers! We don't get many such as you in these parts, not since the Marquis' men took control of the pass. I suppose you're wondering why you can't post images or links on this Fifthday?* #Thursdays are Text-post Only Days on /r/DnD. We're disabling picture and link posts for 24 hours to encourage discussion posts. We originally began this trial [about six months ago](https://www.reddit.com/r/DnD/comments/ngao86/starting_tomorrow_may_20_thursdays_will_be_art/) and the response has been overwhelmingly positive. I've personally enjoyed a lot of the conversations that have sprung up on these days (and a smarter mod would have bookmarked some of them to use as examples* in this post). As of now we're planning on keeping the experiment running indefinitely. We're always looking for feedback, so please let us know of your experience. Have you been enamored with a discussion post that arose one Thursday? Have you mourned having to wait one more day to see your comic update? We welcome all takes. The switch is still happening manually, so it will happen around about midnight Eastern US time. If anyone is aware of a way to automate the process, please [message the mods](https://www.reddit.com/message/compose?to=%2Fr%2FDnD). *Perhaps you could discuss this...we've heard tale of a path through the eastern ridge. If such a trail exists we could circumvent the Marquis' blockade and supply this rebellion. Won't you help us, strangers!?* --- \* The first Thursday after making this post, someone posts [the most classic question imaginable](https://www.reddit.com/r/DnD/comments/qwudav/if_you_could_cast_one_spell_5e_irl_from_any/). This is what it's all about.
r/dnd
post
r/DnD
2021-11-18
Z0FBQUFBQm9Delo1T1puclJRWEF3ZjRWM1pNbmo5aEg5VW9ETDhsX01DaEhPYnR4eHo2WDhZS2k2aktBejBEWmZwc3V6Q3daay05bFlrV183NVNXdUJzdEJfNXllR1JPSXc9PQ==
Z0FBQUFBQm9DemFzU2VPUHQxWlVXcU83SGN4aVQ3b2pCRHBoNlRTQ2ZEam5nNlJ4VnRGc3dPdVJzRjZJdEtZWmNoTUlDNjczTnlQSVlhUzZBSFZlaU0xbmFJRVZ2aHNvcHJIc0EweGVxemp3T2s4MUtsT2dXWG1TNE5zaFE0MHA4d0xpRXF3cWhvT3Z4c1M5MmcwNFN5eDVjNlJoamRiNkh6OU42SEVlRW9zclhPSDQ2WWtkODhYVVRmQjNka2JXMjRlX3pLNmF3Ulhk
I hope this post is of interest to the members of this community. The International Conference “AI for People: Towards Sustainable AI” (CAIP’21) is taking place online from 20-24 November. The conference aims at bringing together Academics and the general public to discuss Sustainable AI. See the program and join the conference at: [https://aiforpeople.org/conference/](https://aiforpeople.org/conference/)
r/aiethics
post
r/AIethics
2021-11-21
Z0FBQUFBQm9Delo1ZDBvSjR6OGtrWEh0WFF3NTRvQVNxT0doMFZfU1hqcG1XSFZCa3RzU1FDUDhPY1lsdlZNeTNacHhFR2ttODU0VUJ1aE9XcEQzbUN2Um5YZDhzb2REckE9PQ==
Z0FBQUFBQm9DemFza05PN0tGMjJIT1dwUFFIZmloR0lFRVRhTTk3ZGpHWEdDNHZZcGxDZGI5TkxqY01JWVJUSm1sSlliZzF2aU5fdlVzS1lOdlk0elFZSG94NWxjejNMS3VzLS1hZDJKRDRUTVhQdUJYc0pYMjVDRm43Y0otUzYyTTNnZlhaMDMyYXc0QklKNkN2a1RJZnBuRHc0WU5LSVVaQlFXVGR6ZnNvb3R1SjE2SVNXelhwX2hoRjdCZkt2VW5UWHZHTlBIMTBqcWVibnRMaFlqcWg0OFNKQ3dVNHhxdz09
Welcome to the [Tekken Dojo](/r/Tekken/about/wiki/tekken-dojo), a place for everyone to learn and get better at the wonderful game that is Tekken. ## Beginners should first familiarize themselves with the [Beginner Resources](/r/Tekken/wiki/beginner-resources) to avoid asking questions already answered there. Post your question here and get an answer. Helpful contributors will be awarded [Dojo Points](https://www.reddit.com/r/Tekken/wiki/tekken-dojo/dojo-leaderboard), which can make them Dojo Master at the end of the month (awards a unique flair). Please report unhelpful contributors to ensure the dojo remains a place dedicated to improvement.
r/tekken
post
r/Tekken
2021-11-30
Z0FBQUFBQm9Delo1NGxnb2xUamJKMnJlbGtjZWN1NDh5dHVoLXVjRVBiTWtvdERXazh1WUZZQ2dPTEtITFN0ckItQ1B1ZnByc05fek5NSlZCUU9BQUtXd0swekFfbnNqcEE9PQ==
Z0FBQUFBQm9DemFzY1JNX3ptTXNJN3VNaU42My1uLVNjeFNFajhqaWxRcEV4a1A4clpFSE5sdUt0dnFkYnhkYjBhZEk5cmdnZjNtTFhIM3NLQUJZVVlBblZtUTVXLTNRYzN2WDFjZ0FaUHFkZVV1dUhSa1BkZ3BldkRWbFc5Q1RHc3E0bmdBU2hUMjh2MEtoZ3hXX3N4Q1BiR1pIdXF1SlRIVFhBRWxVSm5Rd19lTHBwWThsQWo0PQ==
Hi all! I'm working on a paper about measuring algorithmic fairness in cases where you don't have direct access to demographic data (for example, if you want to see whether a lender is discriminating against a particular race but the lender is not collecting/releasing race data of loan applicants). If you have \~10 minutes, it would be a great help to hear from this community on whether/how often you have faced this issue in practice and what you think should be done to mitigate. Survey link is here: [https://cambridge.eu.qualtrics.com/jfe/form/SV\_e9czBBKDitlglaC](https://cambridge.eu.qualtrics.com/jfe/form/SV_e9czBBKDitlglaC)
r/aiethics
post
r/AIethics
2021-12-28
Z0FBQUFBQm9Delo1dzhJTGMxV1VMcW9CU3IwR1M1MFRXUUJ1VUFJYndmMWN4aFVLUEpqTGR4MHpMZDl5TkJ1UERiYjUzQlZCOFE0bldMUmlKc3dTVjlYdTcxMGxjTk9NOFE9PQ==
Z0FBQUFBQm9DemFzUGpHaERfMGs2bjBtc1d5eHc0d09aYWtaaFdZMnluZGREUG1RZmQwQTRKSFRBcVVTeEo4cXZRZjZ0c09oUkVlS21mWk1JRGcxdmkzVmNFb0E1VjBfR3ZwNEJBRWNTR21fRDlvSmY4WGU4Wm1mdkYyWXZfajZtb01yNW94WGJkM091Ul9JWWR6My1wSnlGQTRTcmUyc3VPM0dOX0ZGYUFwLVlGY0E0c29CcE1CdGRCMy1DcHdxckJkOXo5aHF6V1R2TG9scW1Qb25WR3RPUjhXVzVGRW9sdz09
I just found out a local shitty bank is a publicly traded stock with a 2 billion dollar market cap. And I’d like to short it. My plan is to withdraw cash like 100$ from them and deposit it with a different bank then transfer it back to them and withdraw the same 100 $ until they run out of physical cash. I would then go around and let people know that when I tired withdrawing money from them that there was no cash to withdraw. This in turn should cause a bank run and I’m assuming a decent amount of people would close their accounts leading for the stock price to fall. Puts are extremely cheap and I would love for this bank to go out of business or lose public trust. HAs anybody tried this method before? Are there any REAL downsides?
r/wallstreetbets
post
r/wallstreetbets
2022-01-08
Z0FBQUFBQm9Delo1WGktMlFpejEtYlR0NFI5RzRyc3J1anVPcFhlSTYwZFM2ZXRaTnQ3WkJ4T2Y4MHdrbXFQcWlEcTFUMFNSUExJZWVnUDBVX0FRMnJvUDltNkJDdF9YdFE9PQ==
Z0FBQUFBQm9DemFzY0xXTE93b3ZPdHVzN0ZFVnlzeHduRngwNFJSM1REb3RZcE5mVlN5eER5RUVxakVaQnhlRVZ5YjdHTzQ2T0pFMEphSUc2YUJuNVFhQjQxYndiRGFEb3RYdUVmb0pEVGR0a2hHbHh2c3ItNVN1RHVKM0lEZ1Z4SVQxYnY3V3VNX2NkTC16MFJZVXIzWjFXSW1oTldmQ2tvQWpZOFhNN05uSUJnbXlaQTlaYXZZNFQyYlhjS3dTb3piZDhCOVZvZFVTV1Fqci1WWkZfYzM2RW9ycWhNTW9Kdz09
Henlo frens! Good to see all of you here UwU. Grab on to your bodypillows, I have a smol announcement about the purpose of this subreddit. This community is meant as a fun, lighthearted place where we can commiserate with each other about those annoying little irritations that hinder our day to day enjoyment in life. That means that suitable posts here can be about my children. And things like a wall socket or tile being placed out of allignment. A crack in a phone screen. Duckling shit on your new car. Incomprehensible software. Mismatched buttons. You know, the little things. This subreddit isn't meant to incite rage mobs that go after people. For that reason we say: #No reddit meta posts No posts about being banned from subreddit. No posts about up- or down-votes. No posts about shitty moderators or users or subreddits. No posts about reddit. All jokes and tomfoolery aside, that sort of thing gets us in trouble with site admins. If we allow one type of post about reddit it then very easily moprhs into allowing posts that directly call out other subreddits or users, we just can't allow any of it. That rule already existed for years and we have just made it more clearly visible in the sidebar on old and new reddit. We're gonna be a little strict on it for a bit I'm afraid. 🥺 Thank you all for being awesome and have a very Merry Christmas! Celebrate Christmas in the traditional European way, with a suasage roll! https://imgur.com/gallery/K42ajAZ
r/mildlyinfuriating
post
r/mildlyinfuriating
2022-01-11
Z0FBQUFBQm9Delo1N05fR3hyaUhtOFdYOG8tbVFMSndCaFBpZGY5RWVMbS1JbFlwVGo1TTVISEtiY3BUWnBjei1mSmExZUtqVGFXaG94Ul9aOGxTS3BBZEJGSEVLeVNBTnc9PQ==
Z0FBQUFBQm9DemFzZTlibjBvRHV6RF9xcldpV1lqR2VHNjN6cXdlYkl5dEhhSlJFbzFnSmtUYkpPUExjNEY5cDhWOGpoMk53bkV0eEZOODBIeWY1ZTVLazVWUTVQRHhhVjEzY2x2T0wzNGZkQWxQSC1ydEl0TzlCRmhvSXRsYU01dmVFREdON25PeGowZURqdTBPanZMVlVXLU5nZGtyaDlBS1NKbi1IOTJNRHA1Qng0OFh0QzNvU25xUEZoNEhMRTBYUGFkTVZ1ZFhq
End of preview.

Bittensor Subnet 13 Reddit Dataset

Data-universe: The finest collection of social media data the web has to offer

Miner Data Compliance Agreement

In uploading this dataset, I am agreeing to the Macrocosmos Miner Data Compliance Policy.

Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.

Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:

  • Sentiment Analysis
  • Topic Modeling
  • Community Analysis
  • Content Categorization
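
As a concrete illustration of the first task above, the sketch below runs an off-the-shelf sentiment classifier over the text field. This is a minimal example, not part of the dataset itself: it assumes the datasets and transformers libraries are installed, and the default pipeline checkpoint is an arbitrary public model.

from datasets import load_dataset
from transformers import pipeline

# Stream so we don't download all ~6.5M rows up front.
ds = load_dataset("lesnikutsa/reddit_dataset_64", split="train", streaming=True)

# Default English sentiment checkpoint; swap in any classifier you prefer.
classifier = pipeline("sentiment-analysis")

for row in ds.take(5):
    text = row["text"][:512]  # crude truncation for long posts
    result = classifier(text)[0]
    print(result["label"], f"{result['score']:.3f}", text[:60])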

Languages

Primary language: The data is mostly English, but it can be multilingual because of the decentralized way in which it is collected.

Dataset Structure

Data Instances

Each instance represents a single Reddit post or comment with the following fields:

Data Fields

  • text (string): The main content of the Reddit post or comment.
  • label (string): Sentiment or topic category of the content.
  • dataType (string): Indicates whether the entry is a post or a comment.
  • communityName (string): The name of the subreddit where the content was posted.
  • datetime (string): The date on which the post or comment was created.
  • username_encoded (string): An encoded version of the username to maintain user privacy.
  • url_encoded (string): An encoded version of any URLs included in the content.
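
A minimal sketch of loading the dataset and inspecting these fields with the Hugging Face datasets library (the repo id is taken from the citation below; streaming is used so the full dataset is not downloaded):

from datasets import load_dataset

ds = load_dataset("lesnikutsa/reddit_dataset_64", split="train", streaming=True)

# Print a truncated view of every documented field for the first instance.
first = next(iter(ds))
for field in ("text", "label", "dataType", "communityName",
              "datetime", "username_encoded", "url_encoded"):
    print(f"{field}: {str(first[field])[:80]}")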

Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
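
One hedged way to build your own splits is to cut on the datetime field. The slice size and cutoff date below are arbitrary choices for illustration:

from datasets import load_dataset

# Materialize a manageable slice; adjust or drop the slice for real use.
ds = load_dataset("lesnikutsa/reddit_dataset_64", split="train[:100000]")

# datetime is stored as an ISO-style string (see Data Fields above),
# so lexicographic comparison doubles as date comparison.
cutoff = "2024-01-01"
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))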

Dataset Creation

Source Data

Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.

Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.

Considerations for Using the Data

Social Impact and Biases

Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.

Limitations

  • Data quality may vary due to the decentralized, miner-driven collection process.
  • The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
  • Temporal biases may exist due to real-time collection methods.
  • The dataset is limited to public subreddits and does not include private or restricted communities.

Additional Information

Licensing Information

The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.

Citation Information

If you use this dataset in your research, please cite it as follows:

@misc{lesnikutsa2025datauniversereddit_dataset_64,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={lesnikutsa},
  year={2025},
  url={https://huggingface.co/datasets/lesnikutsa/reddit_dataset_64}
}

Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

Dataset Statistics

[This section is automatically updated]

  • Total Instances: 6560487
  • Date Range: 2015-02-24T00:00:00Z to 2025-04-26T00:00:00Z
  • Last Updated: 2025-04-26T19:44:36Z

Data Distribution

  • Posts: 3.28%
  • Comments: 96.72%
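
These percentages come from the dataType field and can be re-derived directly. The sketch below only counts a streamed sample, so run it over the full stream to reproduce the exact numbers:

from collections import Counter
from datasets import load_dataset

ds = load_dataset("lesnikutsa/reddit_dataset_64", split="train", streaming=True)

# Tally posts vs. comments over the first 10,000 streamed rows.
counts = Counter(row["dataType"] for row in ds.take(10_000))
total = sum(counts.values())
for kind, n in sorted(counts.items()):
    print(f"{kind}: {n / total:.2%}")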

Top 10 Subreddits

For full statistics, please refer to the stats.json file in the repository.

Rank  Subreddit            Total Count  Percentage
1     r/AskReddit          737163       11.24%
2     r/AITAH              250205       3.81%
3     r/wallstreetbets     222809       3.40%
4     r/AskUS              214153       3.26%
5     r/AmIOverreacting    187961       2.87%
6     r/politics           143410       2.19%
7     r/nba                129421       1.97%
8     r/marvelrivals       128864       1.96%
9     r/mildlyinfuriating  126835       1.93%
10    r/teenagers          124478       1.90%
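
The full statistics referenced above can be fetched programmatically. A sketch using huggingface_hub, assuming stats.json sits at the root of the dataset repo:

import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lesnikutsa/reddit_dataset_64",
    filename="stats.json",
    repo_type="dataset",
)
with open(path) as f:
    stats = json.load(f)

# Inspect whatever top-level keys the file exposes.
print(list(stats)[:10])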

Update History

Date                  New Instances  Total Instances
2025-04-25T07:30:40Z  5268648        5268648
2025-04-26T01:38:39Z  709502         5978150
2025-04-26T19:44:36Z  582337         6560487