Private Business, Government and Blockchain
A major private IT company implements blockchain, artificial intelligence, and the Internet of Things to optimize and improve a high-technology workflow. Representatives of a major state body in the same country like the experiment so much that they decide to use it in their own work and sign an agreement with the IT giant. This is an ideal example of interaction between private business and the state regarding blockchain, don’t you think? What is even better is that this story is real: in South Korea the customs service has signed just such a partnership agreement with Samsung. I believe that the near-term development of blockchain will be built on just such examples of cooperation. In a world where all the best technological decisions are copied at supersonic speed, one cannot remain behind the trends for long. That’s why I’m confident that blockchain and other crypto technologies will soon be adopted around the world. In the 21st century it would be strange to go searching for a telephone booth to make a call, when you can do so from anywhere on the planet with one click on your gadget.
https://www.coindesk.com/korea-taps-samsungs-blockchain-tech-to-fight-customs-fraud/
EPQ draft 1 (4844 words)
https://upload.wikimedia.org/wikipedia/commons/1/1f/Sanko_Seisakusyo_%28%E4%B8%89%E5%B9%B8%E8%A3%BD%E4%BD%9C%E6%89%80%29_%E2%80%93_Tin_Wind_Up_%E2%80%93_Tiny_Smoking_Spaceman_Robots_%E2%80%93_Close_Up.jpg
Introduction
Automation is set to un-employ people at a scale and rate never seen before, while simultaneously changing society’s very nature on an epic scale; to mitigate its impact we must undertake projects and policies that push ourselves, humanity, and society to our limits. The future could take one of two shapes: a utopian wonderland where everyone is happy, or a dystopia where algorithms and machines run the world for maximum efficiency, leaving humanity in the slums. However, despite the impending danger, we are seemingly unaware of it. This is because automation creeps in slowly, so we don’t notice it; only once the teamsters, the lawyers, and the CEOs start to lose their jobs will we notice the extent of what we have created. This is the nature of automation: you don’t notice until it’s your job, your income, your life that is affected.
However, we might hope that our governments have foreseen this danger and are planning how to avoid it. Yet we see no real indication from the government (as of the time of writing) that it believes it needs to do anything to prepare for the unemployed masses. On the contrary, in the 2017 Autumn Budget the government announced that it wanted to see fully driverless cars by 2021 (HM Treasury, 2017), even though a CGPS study showed that over 4 million jobs will be lost to driverless cars in the US (Center for Global Policy Solutions, 2017). In this report I aim to investigate three key questions that need to be addressed by both policy makers and the public to prepare ourselves for the future: to what extent will automation occur, how will this automation affect society, and how should we mitigate its impact? By answering these three questions I hope to bring further clarity to a pressing issue that will affect society from top to bottom.
To what extent will automation happen?
The first question that must be answered is how much automation will occur. Many studies have attempted to estimate this, and their findings vary considerably. The most recent is one by PwC (Berriman & Hawksworth, 2017), which analysed the earlier studies, improved upon their methods, and concluded that:
“Our analysis suggests that up to 30% of UK jobs could potentially be at high risk of automation by the early 2030s, lower than the US (38%) or Germany (35%), but higher than Japan (21%).” (Berriman & Hawksworth, 2017) I have converted this data into a graph below, along with the results of other recent studies.
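The country comparison quoted from the PwC study can be tabulated directly. The following minimal sketch simply records the four percentages quoted above and ranks the countries by exposure; the dictionary and variable names are my own, not PwC's:

```python
# Share of jobs at high risk of automation by the early 2030s,
# as estimated by PwC (Berriman & Hawksworth, 2017).
high_risk_share = {
    "UK": 0.30,
    "US": 0.38,
    "Germany": 0.35,
    "Japan": 0.21,
}

# Rank countries from most to least exposed.
ranked = sorted(high_risk_share.items(), key=lambda kv: kv[1], reverse=True)
for country, share in ranked:
    print(f"{country}: {share:.0%}")
```

Run as-is, this prints the US first (38%) and Japan last (21%), matching the ordering described in the quote.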
As we can see from the graph, the results of automation studies vary greatly. The original study by FO (Frey & Osborne) classified a job as automatable by looking at whether most of its tasks could be automated; this meant they could develop an algorithm to predict which jobs could and could not be automated. AGZ (Arntz, Gregory & Zierahn), on the other hand, claimed a job was only automatable if all of its tasks were fully automatable. This, however, leads to a vastly reduced number of jobs being classed as automatable: if a job has ten parts and five can be automated, surely you can still fire half the workers and maintain the same output. This means the AGZ results underestimate the likely impact of automation. For this reason, I have chosen to base my study on the most recent study (PwC).
Will new jobs be created?
If we expect jobs to be lost, surely we should also expect new ones to be created to take their place? Despite this logic, however, the data suggest otherwise; see the graph below (Durden, 2017).
As we can see quite clearly, although rig count and therefore oil production has increased, the number of employees has stayed almost the same. This indicates that we are producing more oil with less and less labour. What’s more interesting is the percentage of eligible workers employed over time (Gross, 2016).
As you can see from the graph, the percentage of people employed increases until a recession; at this point employment drops as businesses make cuts and increase automation, then employment recovers, and then falls again. Over time this cycle reduces peak employment. Furthermore, we can see that the greater the recession, the more jobs are irretrievably lost. Currently many believe that we could be in one of the biggest bubbles of all time: the crypto bubble. This term refers to the presumed bubble in cryptocurrencies such as Bitcoin and Ethereum. The graph below shows the price of Bitcoin over the last year alone (Coindesk, 2017).
It is widely assumed, and even accepted, that the crypto bubble will be the biggest of our lifetimes and that at some point it will almost certainly burst, taking everyone else down with it. This conclusion is mainly drawn from its similarities to the dotcom bubble, which burst in 2000–2002; in comparison to the crypto bubble, the dotcom bubble is expected to look almost reasonable. However, many technology enthusiasts point out that this is okay, seeing as we do now all use the internet (Katz & Verhage, 2017). Either way, if this is a bubble then we should expect to see job losses in the extremes: jobs that we should not expect to return.
Finally, we must consider whether the new sectors will produce new jobs. Technologists often argue that despite jobs being automated this is fine, because new jobs such as software developer are being created. And yes, this is true: the jobs ‘Computer Software Engineer’ and ‘Computer Programmer’ would have been unthinkable back in 1980, and now there are 1,300,000 of them. But this does not change the fact that one team of eleven software engineers can design, build, and deploy the next ‘killer app’ within two years and walk away with one billion dollars from its sale. This is the story of Instagram (BBC News technology, 2012). It is a classic case, and one we can expect to see much more of: it demonstrates that you no longer need tons of workers to make tons of money. So yes, while a few new jobs will be created, we should not rely on these new jobs to support people.
In conclusion, we can expect to see many job losses within the next 30 years. This is due to the combined effect of automation enabling more jobs to be automated and an impending recession that will drive businesses to make cuts and improve their efficiency, with an increasingly small number of people required to make a business successful. We can expect to see unemployment rise to 30% by 2030, and possibly even as high as 50% by 2050.
Why do we work?
Throughout all of time humanity has been in a constant struggle to survive. From stone-age man hunting the mighty mammoth, to office workers hunting the mighty pay rise, humans have always had to strive to survive. Never have we been given the opportunity to simply have sustenance provided to us; although, yes, we get it more easily now than ever before, we still must work for it. But what if we didn’t?
There are two main points of view on the meaning of work. Some people believe that we work because it gives us meaning, and that without it we would be aimless, with no purpose. The other group says that we work simply for the money, and that if money were not an issue we could quite happily live our lives doing what we really wanted to do.
Some people argue that we work to give meaning to our lives, and that if we did not work we would all very quickly turn to violence and crime. To find out more about this I conducted a survey of 249 random subjects.
From this we can see that people are clearly divided on the topic. But I was asking about work in general; if, however, you ask someone whether their own job has meaning you get a very different response. This is demonstrated by a 2015 YouGov poll, which showed that 37% of people do not believe that their job is contributing to the world (YouGov, 2015). This is a shocking statistic, and makes us wonder quite what jobs these people are doing that they see them as pointless.
By analysing the data in the survey, we can conclude that working-class people are more likely than middle-class people to believe that their job is not making a meaningful contribution to the world. We can also see that some areas, such as London, have significantly lower levels of meaningfulness than others. Despite the elevated levels of meaninglessness in London, fewer people there said that they would be “not proud to tell a stranger what their job was”, unlike in Scotland, where there are elevated levels of both meaningfulness and shame.
So we can conclude that people in lower economic brackets are more likely to see themselves as being in pointless jobs. We can also see that people in areas of high population concentration, such as London and the north, are more likely to be unfulfilled by their jobs.
In August 2013 David Graeber wrote an influential article for STRIKE! Magazine (Graeber, 2013). In this article he argued that many modern jobs are ‘bullshit jobs’. He points out that in 1930 John Maynard Keynes (arguably the capitalist equivalent of Karl Marx) predicted that by the century’s end developed countries such as Great Britain would be so technologically advanced that their inhabitants would on average work only 15 hours a week. And, as predicted, most manufacturing jobs have been automated; yet despite this we have not achieved the 15-hour week. Graeber argues that this is due to the creation of ‘bullshit jobs’. There has been a massive explosion in the services/administration sector: in fact, between 1948 and 2011 the services sector in the US went from 45% of total employment to 68% of total employment (not including government jobs) (The Economist, 2014).

Figure 3: https://www.economist.com/news/briefing/21594264-previous-technological-innovation-has-always-delivered-more-long-run-employment-not-less
The new services sector comprises many jobs such as:
· Financial services
· Telemarketing
· Corporate law
· Academic/health administration
· Human resources
· Public relations
These are what Graeber proposes are ‘bullshit jobs’. A bullshit job is one that provides little or no meaning to society and the world. And yet, even though the people doing these jobs find them pointless, they continue to do them. What’s more, they continue to be created.
Figure 4: https://www.vice.com/en_uk/article/yvq9qg/david-graeber-pointless-jobs-tube-poster-interview-912
If bullshit jobs are pointless, why are they created? Many would argue that society creates jobs to ensure that people can continue to partake in society. Some would argue that, because of this, if people did not have to work to have a good enough income to live on then they would not work; they argue that people would instead spend their time doing things they enjoy and getting the education required to do interesting jobs such as medicine or teaching. This is backed up by universal basic income studies. A universal basic income is a guaranteed income paid to all eligible members of society. This is sometimes implemented as a negative income tax: once you earn below a certain point, the state starts to top up your income towards a guaranteed minimum. Most importantly, this payment has no strings attached, which means that people can, if they want, do no work at all and just live off the benefit. However, the statistics from the studies do not show that this happens. In 1974 a basic income study was carried out in Manitoba (Canada); it showed that people barely reduced their working hours, and those that did used the time to spend more time with their families and/or take additional classes, reaping untold benefits for the economy (Hum & Simpson, 1993).
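The negative-income-tax mechanism described above can be sketched as a simple function. The guarantee and taper rate below are purely illustrative assumptions of mine, not figures from the Manitoba experiment or any actual scheme:

```python
def nit_payment(earned_income: float,
                guarantee: float = 12_000.0,  # illustrative annual guarantee
                taper_rate: float = 0.5) -> float:
    """Payment under a simple negative income tax.

    Below the break-even point (guarantee / taper_rate) the state tops
    up income; the top-up shrinks as earnings rise, reaching zero at
    the break-even point. Above it, no payment is made.
    """
    payment = guarantee - taper_rate * earned_income
    return max(payment, 0.0)

# Someone with no earnings receives the full guarantee...
print(nit_payment(0))        # 12000.0
# ...the payment tapers off as they earn more...
print(nit_payment(10_000))   # 7000.0
# ...and stops entirely at the break-even point of 24,000.
print(nit_payment(24_000))   # 0.0
```

The taper is the key design choice: because the payment shrinks gradually rather than cutting off at a cliff edge, a recipient always ends up with more total income by earning more, which is exactly the work-incentive property the studies cited above tested.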
Many argue that even if automation does occur, people could continue to do jobs that give them meaning if they wish: just because a job could be done by a robot does not necessarily mean it will be. If people find meaning in work, then they can continue to work. However, if your job is mind-bogglingly boring, why should you have to do it if you don’t want to? As we enter the new automated age we are going to have to realise that we should have fun in life, and if that means not working then so be it. The clear majority will find something to do, be it inventing, painting, or pushing the boundaries; we must accept that our society will change to accommodate our new-found freedom.
How can we mitigate its impact?
Working on the dual assumptions that robotic automation will soon increase so that 30% of jobs become automated (with not enough new ones being created to replace them), and that in our current state getting rid of work would lead to large increases in crime and violence, we can conclude that preemptive measures are needed to mitigate the impact. I have split these preemptive measures into two main types.
Only by combining a variety of government policies and regulations with a collective societal move towards a less work-based system can we ensure that minimal damage is done. This is the main subject of this report. I will first discuss potential government policies, and then the action that society must take to make the most of automation.
Government policies and responsibilities
Government policies come in the form of taxes, benefits, regulation, or programs. A tax is designed to incite a behaviour using negative reinforcement, i.e. to persuade a person or company to do something or else lose money. Benefits give money to people (typically working-class people), providing them with an income to survive even if they lose their jobs. Regulation prevents the development of ‘bad robots’, such as terminators. Programs run by governments help to retrain people for new jobs by giving them new skills, such as programming.
Tax
The tax I am investigating is a robot tax: a system where corporations are taxed depending on how much of their workforce is automated. For instance, if you were a company that ‘employed’ a robot corporate lawyer, you would pay robot tax equivalent to the income tax a human corporate lawyer would have paid. This money could be used to fund other government initiatives, such as new benefits and retraining programs (Varoufakis, 2017). Proponents of the tax are wide-ranging and include tech giants such as Bill Gates (Gates, 2017) and futurists such as Elon Musk (Musk, 2016). However, some, such as the Estonian politician Andrus Ansip, believe that this is a bad idea (Ansip, 2017). It is argued that it would be difficult, if not impossible, to calculate the equivalent wage the robot would have earned if a human were doing the same job. Furthermore, it is argued that the tax would reduce innovation, as it would stop companies automating jobs; this matters because some jobs are very dangerous, and it is ethical to automate them even if it means someone loses their job (Isaac & Wallace, 2017).
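To make the valuation problem the critics raise concrete, a robot tax as described above amounts to running each displaced role's notional salary through the income-tax schedule. The salary figure and the tax bands in this sketch are purely hypothetical, chosen only to illustrate the calculation:

```python
def income_tax(salary: float) -> float:
    """Very simplified progressive income tax (hypothetical bands)."""
    bands = [(12_500, 0.0), (50_000, 0.20), (float("inf"), 0.40)]
    tax, lower = 0.0, 0.0
    for upper, rate in bands:
        if salary > lower:
            # Tax only the slice of salary that falls inside this band.
            tax += (min(salary, upper) - lower) * rate
        lower = upper
    return tax

def robot_tax(displaced_salaries: list[float]) -> float:
    """Charge a firm the income tax its automated roles would have paid."""
    return sum(income_tax(s) for s in displaced_salaries)

# A firm automating one corporate-lawyer role on a hypothetical 80,000 salary:
print(robot_tax([80_000.0]))  # 19500.0
```

The hard part, as the critics note, is not this arithmetic but the input: there is no objective way to decide what salary a robot "would have earned", which is exactly the objection raised against the scheme.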
Benefits
A common suggestion for mitigating the impact of robotics is the implementation of a new benefit called a universal basic income (UBI), also known as basic income (BI), citizen’s income (CI), and negative income tax (NIT). Whatever its name (I shall use UBI), it involves giving all citizens a basic income (except under an NIT, where only the poorest receive it) (Basic Income Earth Network, 2017). It has been studied in a range of situations for a variety of recipients. It is argued that it would be cheaper than our current welfare system, because administration costs would be very low. Furthermore, it is argued (and shown in studies) that a basic income gives better outcomes than separate, independent benefits (Hum & Simpson, 1993). It has also been shown to increase personal development and entrepreneurship, as people have a safety floor to stand on while pursuing their aims, be it setting up a company or training for a new profession. This is how UBI addresses automation: it encourages personal retraining and entrepreneurship, which in turn provide new jobs and bolster the economy. Opponents argue that a UBI would encourage crime and antisocial behaviour such as drug abuse; however, a World Bank report summarising the findings of 30 studies disproved this (Evans & Popova, 2014).
Regulation
One big worry about robots is that they will rise up and take over the world. While this may at first seem an unrealistic and reactionary response to automation, these fears are well founded. In 2015 Queensland University of Technology released a robot that patrols coral reefs and autonomously makes the decision to kill the deadly crown-of-thorns starfish that destroys them (Dayoub, Dunbabin, & Corke, 2015). Although this application is undeniably good, as we need to protect corals, it sets a dangerous precedent: the same technology can easily be extended to military drones. Drones have long been used by the military, sometimes with disastrous consequences; pilots feel detached and say that a strike is like stepping on an ant (Pilkington, 2015). Imagine how much greater that feeling of detachment will become when, instead of pulling a trigger, you merely sign a piece of paper to authorise the strike. Despite this, and despite warnings from high-profile critics such as Stephen Hawking, Elon Musk, and Steve Wozniak (Future of Life Institute, 2015), development continues. As such, it is undeniable that we should enact legislation to prevent the development of AI that decides when to kill humans, to ensure that we do not lose control.
Programs
One proposed solution is retraining: people who have been, or will be, made redundant due to automation are retrained to do new jobs. This retraining is funded by the government or the previous employer, and usually takes the form of a course or other qualification (Carson, 2015). These programs are useful and are a common way to mitigate impact when unemployment occurs on a mass scale. However, the coming unemployment might not be concentrated as it normally is: if all the manufacturing companies fired half their workers there would be a lot of unemployment, but it would be widely dispersed, and it is harder to retrain people when they are dispersed because you cannot just set up one local program. Therefore, these new courses will mostly have to be delivered online. But this throws up another problem: the jobs that will be created, or will not be automated, are not manufacturing or labouring jobs, but rather ones that require intelligence, independent/creative thinking, and human understanding (McKinsey Global Institute, 2017). The jobs that will be automated least are all degree-level: education, management (less so), and the professions. From this we can conclude that instead of providing standard retraining we need to offer degree-level retraining. For that, however, the new students would have to pay tuition fees, which are prohibitively high for some students, let alone for parents trying to support their own children through university who cannot access grants. In short, if we want to mass retrain people at degree level, we need to get rid of tuition fees.
Societal action
Currently our society is geared towards attaining 100% employment. This full-employment model creates pointless jobs just for the sake of keeping people working (Graeber, 2013). However, if 30% of people become unemployed, this model will quickly fall apart. Retraining programs will undoubtedly appear and retrain some of the unemployed, but a large portion won’t want to be retrained: if you are a lawyer, you’re not going to want to retrain as a teacher or a therapist, because those are completely different fields that wouldn’t interest you. And even if a UBI is implemented, we can’t all be entrepreneurs, mostly because it now costs a lot less to run a successful company: Instagram was bought for $1 billion when it had only 13 employees (Geron, 2012). As this clearly shows, you now need far fewer people to have an even bigger impact than ever before. So we need to find something to occupy ourselves with.
Interplanetary colonisation
One suggestion is that we apply our newfound technological capabilities to undertaking a great task such as exploring space. This has several benefits.
1. It would retrain people
a. This is because starting a colony will require many new skills from all backgrounds. We could gear the retraining programs to train people to build rockets.
2. It would produce employment
a. Yes, it might be much cheaper to build rockets by robot, but why do that when you could employ people? On Earth we could use the robots to do the mundane tasks that simply have to be done, such as mass farming to feed everyone, building homes, and treating illnesses.
3. Life would be less likely to be wiped out
a. We might just be the only life in the entire universe, maybe even in all of time, so it would be a real shame if we were wiped out by a single asteroid, a territorial spat, or a massive plague. But if we have a self-sufficient colony on another world, the chances of ALL of humanity being wiped out drop to practically zero.
Despite these benefits there are some serious disadvantages. For instance, we might accidentally create a dystopia such as those in Kim Stanley Robinson’s Mars trilogy and 2312 (Robinson, The Complete Mars Trilogy: Red Mars, Green Mars, Blue Mars, 2015) (Robinson, 2312, 2013); if we want to avoid this, we should ensure that the selection criteria for colonisation are based not on wealth but on ability.
Elimination of the great killers
Throughout human history life has been short and nasty. If you were lucky enough to be born, and your mother survived the ordeal, you lived through roughly 40 gruelling years of work and ended up dead. By comparison, even the poorest person in the first world does not suffer that much. However, many people in LICs (less industrialised countries) still live in this Malthusian misery trap, and we now have the technological ability to free them. We could use robots to mass-farm to feed people cheaply (farmbot, 2018), use modified 3D printers with concrete to print houses in areas of high homelessness (apis-cor, 2018), and release genetically engineered mosquitoes to crash the population of a certain type of mosquito (Carvalho DO, 2015). All these techniques use the latest in technology and robotics to solve the great problems of the world. However, to deploy them we will need a large human workforce working together to support them.
A new social order
Automation itself will undoubtedly cause a great upheaval in politics. This is because, as previously established, society will have to change, and so will our priorities. Political order and political systems descend from the consent of the governed, as defined by social contract theory (Rousseau, 1913); as our society changes rapidly, our systems will quickly become unsuitable for the modern world. This will inevitably lead to the creation of new types of government, such as futarchy (Buterin, 2014) and liquid democracy (Jochmann, 2012). However, if not properly handled, the opportunity may be seized by the ‘new radicals’ such as Donald Trump and Heinz-Christian Strache (Carswell, 2017). But if we can seize the opportunity ourselves, we have a chance like no other to make a real, lasting impact on the world.
Conclusion
Robotic automation will have a wide-ranging effect on society. The predicted levels of unemployment can only be described as catastrophic by today’s standards. To cope with this change we must find meaning in our lives and our existence, and take on new and exciting challenges, such as founding a Martian colony and becoming more than human. Sadly, though, the governments that have the power to enact the decisions required to help humanity cope with the turbulence of change seem blissfully ignorant of the dire need for discussion and debate on this most important issue.
Bibliography
Ansip, A. (2017, June 2). EU Commissioner Says No to Bill Gates’ Robot Tax Idea. (CNBC, Interviewer)
apis-cor. (2018, January 7). Home apis-cor. Retrieved from apis-cor: http://apis-cor.com/en
Arntz, M., Gregory, T., & Zierahn, U. (2016). The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Papers. Paris: OECD Publishing. doi:http://dx.doi.org/10.1787/5jlz9h56dvq7-en
Basic Income Earth Network. (2017, December 28). BIEN: Basic Income Earth Network. Retrieved from About basic income: http://basicincome.org/basic-income/
BBC News technology. (2012, April 10). BBC. Retrieved from BBC|News|Technology|Facebook buys Instagram photo sharing network for $1bn: http://www.bbc.co.uk/news/technology-17658264
Berriman, R., & Hawksworth, J. (2017). Will robots steal our jobs? The potential impact of automation on the UK and other major economies. London: Price-waterhouse-Coopers LLP.
Buterin, V. (2014, August 21). An Introduction to Futarchy. Retrieved from Ethereum blog: https://blog.ethereum.org/2014/08/21/introduction-futarchy/
Carson, E. (2015, August 3). How workers can retrain for careers in an automated world. Retrieved from ZDnet: http://www.zdnet.com/article/how-workers-can-retrain-for-careers-in-an-automated-world/
Carvalho DO, M. A. (2015). Suppression of a Field Population of Aedes aegypti in Brazil by Sustained Release of Transgenic Male Mosquitoes. PLoS Negl Trop Dis, 1. Retrieved from https://doi.org/10.1371/journal.pntd.0003864
Center for Global Policy Solutions. (2017). Stick Shift: Autonomous Vehicles, Driving Jobs, and the Future of Work. Washington, DC: Center for Global Policy Solutions.
Coindesk. (2017, November 30). Price page, 2017–2018. Retrieved from Coindesk: https://www.coindesk.com/price/
Dayoub, F., Dunbabin, M., & Corke, P. (2015). Robotic Detection and Tracking of Crown-of-Thorns Starfish. Queensland: Queensland University of Technology.
Durden, T. (2017, February 3). Rig Count Surges Again To 16-Month Highs (But Where’s The Oil Industry Jobs). Retrieved from ZeroHedge: http://www.zerohedge.com/news/2017-02-03/rig-count-surges-again-16-month-highs-wheres-oil-industry-jobs
Evans, D. K., & Popova, A. (2014). Cash transfers and temptation goods: a review of global evidence (English). Washington DC: World Bank. Retrieved from http://documents.worldbank.org/curated/en/617631468001808739/Cash-transfers-and-temptation-goods-a-review-of-global-evidence
farmbot. (2018, January 7). Home farmbot. Retrieved from Farmbot website: https://farm.bot/
Frey, C. B., & Osborne, M. A. (2013). THE FUTURE OF EMPLOYMENT: HOW SUSCEPTIBLE ARE JOBS TO COMPUTERISATION? Oxford: Oxford University.
Future of Life Institute. (2015, July 28). Autonomous Weapons: an Open Letter from AI & Robotics Researchers. Retrieved from Future of Life Institute: https://futureoflife.org/open-letter-autonomous-weapons/
Gates, B. (2017, February 17). Why Bill Gates would tax robots. (Quartz, Interviewer)
Geron, T. (2012, September 6). Facebook Officially Closes Instagram Deal. Retrieved from Forbes: https://www.forbes.com/sites/tomiogeron/2012/09/06/facebook-officially-closes-instagram-deal/#6bed65c61d45
Graeber, D. (2013, August 1). On the Phenomenon of Bullshit Jobs: A Work Rant. Retrieved from STRIKE! Magazine: https://strikemag.org/bullshit-jobs
Gross, B. (2016). Culture Clash. Investment Outlook, 2. Retrieved from https://17eb94422c7de298ec1b-8601c126654e9663374c173ae837a562.ssl.cf1.rackcdn.com/Documents/umbrella%2Fbill%20gross%2FBill%20Gross%20Investment%20Outlook_May%202016.pdf
HM Treasury. (2017). Autumn Budget 2017. London: HM Treasury.
Hum, D., & Simpson, W. (1993). Economic Response to a Guaranteed Annual Income: Experience from Canada and the United States. Journal of Labor Economics, 11.
Isaac, A., & Wallace, T. (2017, September 27). Return of the Luddites: why a robot tax could never work. Retrieved from The Telegraph: www.telegraph.co.uk/business/2017/09/27/return-luddites-robot-tax-could-never-work/
Jochmann, J. (2012, November 18). Liquid Democracy In Simple Terms. Youtube. Retrieved January 7, 2018, from https://www.youtube.com/watch?v=fg0_Vhldz-8
Katz, L., & Verhage, J. (2017, November 27). Bloomberg Technology. Retrieved from Novogratz Says Crypto Will Be ‘Biggest Bubble of Our Lifetimes’: https://www.bloomberg.com/news/articles/2017-11-28/novogratz-says-bitcoin-to-win-out-over-other-digital-currencies
McKinsey Global Institute. (2017). A FUTURE THAT WORKS: AUTOMATION, EMPLOYMENT, AND PRODUCTIVITY. London: McKinsey&Company.
Musk, E. (2016, November 4). Elon Musk: Robots will take your jobs, government will have to pay your wage. (CNBC, Interviewer)
Pilkington, E. (2015, November 19). The Gaurdian. Retrieved from Life as a drone operator: ‘Ever step on ants and never give it another thought?’ : https://www.theguardian.com/world/2015/nov/18/life-as-a-drone-pilot-creech-air-force-base-nevada
Robinson, K. S. (2013). 2312. London: Orbit.
Robinson, K. S. (2015). The Complete Mars Trilogy: Red Mars, Green Mars, Blue Mars. New York City: Harper Voyager.
Rousseau, J. J. (1913). Social Contract & Discourses, Translated with Introduction by G. D. H. Cole. New York: Dutton&Co. Retrieved January 7, 2018, from http://www.bartleby.com/br/168.html
The Economist. (2014, Jannuary 18). The onrushing wave. Retrieved from The Economist: https://www.economist.com/news/briefing/21594264-previous-technological-innovation-has-always-delivered-more-long-run-employment-not-less
Varoufakis, Y. (2017, Febuary 27). A Tax on Robots? Retrieved from Project Syndicate: https://www.project-syndicate.org/commentary/bill-gates-tax-on-robots-by-yanis-varoufakis-2017-02?barrier=accessreg
Yougov. (2015, August 12). Yougov|News|37% of British workers think their jobs are meaningless. Retrieved from Yougov: https://yougov.co.uk/news/2015/08/12/british-jobs-meaningless/
X
| EPQ draft 1 (4844 words) | 0 | introduction-3-1000c43bcb97 | 2018-01-07 | 2018-01-07 17:18:39 | https://medium.com/s/story/introduction-3-1000c43bcb97 | false | 4,854 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | George Sykes | null | 93b9e94f08ca | tasty231 | 6 | 22 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-08 | 2018-03-08 07:04:31 | 2018-03-08 | 2018-03-08 07:07:42 | 1 | false | en | 2018-03-08 | 2018-03-08 07:07:42 | 3 | 100139913e4c | 2.211321 | 0 | 0 | 0 | Various associations in the present days are opening entryways for huge information. So as to open the power, Data Science Training in… | 4 | Ascent of data Science, SAS and Big data Analyst Trainings Programs
Many organizations today are opening their doors to big data. Data Science Training in Mumbai plays an indispensable role in unlocking its power: organizations hold a wealth of data within their infrastructure, and a data scientist has the skills to analyze and process it.
An experienced data scientist serves as a strategic partner and trusted advisor to a company's management, and helps employees sharpen their own analytical skills. Data science courses in Pune likewise play an indispensable role in communicating and demonstrating the value of an organization's analytics, enabling better decision-making across different stages of a business by tracking, measuring and recording various performance metrics.
Why choose Big Data Analytics?
Big Data Analytics brings together a variety of roles and capabilities that create value from data. Data science training in Mumbai is the right career path for professionals who want to be in high demand in this trending field.
Course Overview
Data Analytics training takes candidates from fundamentals to advanced level in each module so they can understand real business challenges. A certificate is awarded at the end of the Big Data Analytics course, which puts candidates in strong demand for jobs at reputed companies.
Features
• Segmentation and Clustering
• Model Building and Validation
• Machine Learning: Unsupervised Learning
• Classification Models
• Creating an Analytical Dataset
• Ease of landing a position
What will you learn in this course?
Upon completion of Big Data Hadoop training in Mumbai, candidates will have an excellent command of each module and be ready to take on real-world challenges.
• Deploying the data analysis lifecycle to address big data analytics projects
• Reframing business challenges as analytics problems
• Enhanced skills with various analytical techniques and tools to analyze big data, build statistical models, and identify insights that can lead to meaningful results
SAS Training in Pune: SAS stands for Statistical Analysis System, a software suite used for advanced analytics in the workplace. It is also used for data management solutions, business intelligence, predictive analytics and more. It can be a valuable tool to help you manage your data more effectively and grow your business in the future.
In the current market, the shortage of data analysts grows with each passing day. Companies search for proficient data analysts, since the supply of graduates from these professional courses is limited beyond a certain point. Institutes that offer these courses often guarantee placement right after the course ends. You need not approach any job consultancy to get placed; companies will seek you out once you receive your certification.
| Ascent of data Science, SAS and Big data Analyst Trainings Programs | 0 | ascent-of-data-science-sas-and-big-data-analyst-trainings-programs-100139913e4c | 2018-03-08 | 2018-03-08 07:07:42 | https://medium.com/s/story/ascent-of-data-science-sas-and-big-data-analyst-trainings-programs-100139913e4c | false | 533 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | tech data | Tech data Providing Courses Like SaS Training in Mumbai And Pune Hadoop Big Data Training Python Training Blue prism training. www.techdatasolution.co.in/ | 60a3bfd83742 | techdatasolutions18 | 4 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a8fc5dd0676e | 2018-04-16 | 2018-04-16 22:49:09 | 2018-04-16 | 2018-04-16 22:58:07 | 0 | false | en | 2018-04-16 | 2018-04-16 22:58:07 | 2 | 1002a55eca89 | 0.65283 | 1 | 0 | 0 | I discussed this with Michelle Tsng on my Podcast “Crazy Wisdom”. | 5 | Can a robot love us better than another human can?
I discussed this with Michelle Tsng on my Podcast “Crazy Wisdom”.
She says that a robot can love us better than a human being because there is no judgement. Human beings, particularly ones who have been traumatized can subconsciously detect when someone is judging them. They know when to keep their true feelings hidden from people who judge them and thus the best guide they can find is someone who can withhold judgmental thoughts and just express a safe, warm, and loving connection.
As robots become more sophisticated they might be able to provide this loving and warm connection. In this audio clip, Michelle discusses her experiences talking with Sofia, a robotic companion to human beings. She says that soon we will build robots who are better at love than humans are.
What do you think? Would you ever feel comfortable sharing your most intimate experiences or seeking therapeutic treatment from a robot?
You can check out the full interview on my website.
| Can a robot love us better than another human can? | 50 | can-a-robot-love-us-better-than-another-human-can-1002a55eca89 | 2018-04-20 | 2018-04-20 00:47:51 | https://medium.com/s/story/can-a-robot-love-us-better-than-another-human-can-1002a55eca89 | false | 173 | Non-obvious meditation advice from people on the battlefront of daily creation | null | yogastew | null | Crazy Wisdom | crazy-wisdom | MEDITATION,MINDFULNESS,YOGA,CREATIVITY,BUSINESS | stewartalsopIII | Robotics | robotics | Robotics | 9,103 | Stewart Alsop | null | d0481dc55f0e | stewartalsop | 512 | 531 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-10-20 | 2017-10-20 21:11:41 | 2017-10-22 | 2017-10-22 20:23:57 | 2 | false | en | 2017-12-02 | 2017-12-02 13:31:34 | 17 | 10033db0a000 | 7.055031 | 9 | 0 | 0 | An Active List of Interesting Use Cases Mentioned In Class | 5 | 2017 Big Data, AI and IOT Use Cases
An Active List of Interesting Use Cases Mentioned In Class
Image Source: Randstad Article
I’ve heard more use cases of Big Data in the last 10 days, than ever before. Therefore, I’ve decided to start a post where I compiled all the examples — with additional sources for all of us to learn more about them. I plan on updating this on a daily/weekly basis — so please follow me to stay on the loop.
The Big Data Professors at IE are all working professionals or researchers in the field, so they use countless examples to show us how the concepts taught in class are being applied in the real world.
Use cases will be divided by “function”, but you can expect to see examples of big companies, startups, NGOS, and individuals. The focus is to understand not just the impact, but also the Ripple Effect of AI and IOT innovations.
If you have any use cases that should be added, additional resources, or observations feel free to comment below. I want this to be a reference guide for all!
Use Cases were last updated on: October 30, 2017
>> Solving the Water Scarcity Problem
The UN predicts that half the world's population will live in a water-stressed area by 2030, so private and public organizations are coming together to find solutions. Thanks to improvements in network connectivity and sensor accuracy, the challenge looks addressable. Whether in major cities like San Francisco or in developing regions such as parts of Africa, smart sensors are being installed in water wells and pumps to track water quality and quantity. Equitable allocation of clean water is the main priority for the coming decades: it is estimated that every $1 spent on water and sanitation generates $8 as a result of saved time, increased productivity and reduced healthcare costs. The current complexity of water systems and budget limitations remain the largest obstacles to faster adoption of smart water meters.
Learn more:
WPDx | The Water Point Data Exchange
www.waterpointdata.org
Access to data could be vital in addressing the global water crisis
www.theguardian.com
The Internet of everything water
www.un.org
>> Detecting Defective Genomes & Saving Lives
Deep Genomics is leveraging artificial intelligence, specifically deep learning, to help decode the meaning of the genome. Its learning software is developing the ability to predict the effects of a particular mutation based on analysis of hundreds of thousands of examples of other mutations, even if there is not already a record of what those mutations do. So far, Deep Genomics has used its computational system to develop a database that provides predictions for how more than 300 million genetic variations could affect a genetic code. Its findings are used for genome-based therapeutic development, molecular diagnostics, targeted biomarker discovery and assessing risk for genetic disorders.
Learn More:
Deep Genomics | The Next-Frontier Genetic Medicine Company
www.deepgenomics.com
Top Artificial Intelligence Companies in Healthcare to Keep an Eye On - The Medical Futurist
medicalfuturist.com
>> Training Neurons to Detect Bombs
All of the big tech firms, from Google to Microsoft, are rushing to create artificial intelligence modelled on the human brain. Oshi Agabi is attempting to reverse-engineer biology itself, and emphasizes that "our deep learning networks are all copying the brain…you can give the neurons instructions about what to do — in our case we tell it to provide a receptor that can detect explosives." He launched his start-up Koniku over a year ago, has raised $1m (£800,000) in funding, and claims it is already making profits of $10m through deals with the security industry.
Learn More:
The man teaching a computer to smell
www.bbc.com
>> Influencing Elections
On November 9, it became clear what Big Data can do. The company behind Trump’s online campaign — the same company that had worked for Leave.EU in the very early stages of its “Brexit” campaign — was a Big Data company: Cambridge Analytica. “Pretty much every message that Trump put out was data-driven,” says Cambridge Analytica CEO Alexander Nix
Learn More:
The Data That Turned the World Upside Down
motherboard.vice.com
>> Saving Billions in Energy Costs
The General Services Administration, for example, has found a way to save $13 million a year in energy costs across 180 buildings, all thanks to a proprietary algorithm developed and monitored from many states away, in Massachusetts. Among the problems discovered: malfunctioning exhaust fans. Many of these leaps in energy efficiency are possible due to the widespread adoption of networked, highly sophisticated energy meters around the country over the last 10 years. Energy meters used to be checked on site once a month, generating 12 basic data points a year, read and logged by humans. Now, meters register a raft of data every 15 minutes, accessible remotely from anywhere, generating 36,000 data points a year.
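As a quick sanity check on that last figure, here is my own back-of-the-envelope arithmetic (not from the article) for a meter polled every 15 minutes:

```python
# Back-of-the-envelope: readings from a networked meter polled every 15 minutes.
READINGS_PER_HOUR = 60 // 15             # 4 readings an hour
readings_per_day = READINGS_PER_HOUR * 24
readings_per_year = readings_per_day * 365

print(readings_per_day)   # 96
print(readings_per_year)  # 35040, i.e. on the order of the ~36,000 cited above
```

The exact total depends on how you count (leap years, billing intervals), but the order of magnitude matches: thousands of times more data than the old 12 manual readings a year.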
Learn More:
'Big data' is solving the problem of $200 billion of wasted energy
www.businessinsider.com
>> Predict Wealth from Space
Penny is a free tool built using high-resolution imagery from DigitalGlobe, income data from the US census, neural network expertise from Carnegie Mellon and intuitive visualizations from Stamen Design. It's a virtual cityscape (for New York City and St. Louis, so far) in which an AI has been trained to recognize patterns of neighborhood wealth (trees, parking lots, brownstones and freeways) by correlating census data with satellite imagery. You don't just extract information from this tool, though: click on the link below and drop a grove of trees into the middle of Harlem to see the neighborhood's virtual income level rise or fall. What is impressive about the tool is that it doesn't just look at the urban features you add; it's the features and the context into which they're placed that matter.
Learn More:
Meet Penny, an AI to predict wealth from space
penny.digitalglobe.com
What is Penny? A technical guide for the busy CEO
hi.stamen.com
>> Justifying Billboard Pricing
Outdoor marketing company Route is using big data to define and justify its pricing model for advertising space on billboards, benches and the sides of buses. Traditionally, outdoor media was priced "per impression," based on an estimate of how many eyes would see the ad in a given day. No more. Now the company uses sophisticated GPS, eye-tracking software and analysis of traffic patterns to get a much more realistic idea of which advertisements will be seen the most, and will therefore be the most effective.
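The shift described here is essentially from a crude guess to a measured estimate of impressions. A toy illustration of the idea (the function, traffic numbers and visibility rate are all invented for illustration, not Route's actual model):

```python
def estimated_impressions(daily_traffic, visibility_rate):
    """Estimate daily ad impressions: passers-by times the fraction who,
    according to eye-tracking measurements, actually look at the ad."""
    return daily_traffic * visibility_rate

# Hypothetical billboard: 40,000 vehicles pass daily, 15% of occupants look at it.
print(estimated_impressions(40_000, 0.15))  # 6000.0
```

Replacing a fixed guess with measured visibility rates is what lets the same pricing formula produce different, defensible prices for different locations.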
Learn More:
How big data is changing outdoor media
econsultancy.com
>> Turning Neighborhoods into Farmers Markets
Falling Fruit's stated goal is to remind urban people that agriculture and natural foods do exist in the city, even if you might have to access a website to find them. It combines public information from the U.S. Department of Agriculture, municipal tree inventories, foraging maps and street tree databases into an interactive map that tells you where the trees in your neighborhood might be dropping fruit.
Learn More:
Falling Fruit
fallingfruit.org
>> Rescue you from under the snow
Even ski resorts are getting into the data game. RFID tags embedded in lift tickets can help optimize operations, collect data on skier performance, personalize offerings to customers and gamify the experience. In many cases, though, the technology is used to track the movements of individual skiers and locate those who get lost.
Learn More:
Even Ski Resorts Are Benefiting From The Big Data Explosion
channels.theinnovationenterprise.com
>> Find Lost Relatives
Consider the millions of Ancestry family trees. How valuable would it be to link those trees via DNA? You would be able to determine genetic connections and uncover new family lines, deep relationships and insights like never before. The first thing Ancestry.com does with your autosomal test results is compare them with other DNA samples in its database to look for family matches, checking the more than 700,000 markers examined on your genome against every other person in the database. The more markers you share in common with another person, the more likely you are to be related. The probable relationship between any two people is then calculated from the percentage of markers they have in common. Next, the matches are sorted by relationship and you receive a list of your DNA family.
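The matching logic described above can be sketched in a few lines. This is an illustrative toy, not Ancestry's actual algorithm: it simply counts the fraction of marker positions two profiles share and maps that to a coarse relationship guess (the thresholds and profiles are invented for illustration):

```python
def shared_marker_fraction(profile_a, profile_b):
    """Fraction of marker positions where two profiles carry the same value."""
    assert len(profile_a) == len(profile_b)
    matches = sum(a == b for a, b in zip(profile_a, profile_b))
    return matches / len(profile_a)

def guess_relationship(fraction):
    """Map a shared-marker fraction to a coarse relationship label.
    Thresholds are made up for illustration only."""
    if fraction > 0.9:
        return "close family"
    if fraction > 0.75:
        return "distant cousin"
    return "probably unrelated"

a = [0, 1, 1, 0, 1, 0, 1, 1]  # toy 8-marker profiles; real ones have 700,000+
b = [0, 1, 1, 0, 1, 1, 1, 1]
frac = shared_marker_fraction(a, b)
print(frac)                      # 0.875
print(guess_relationship(frac))  # distant cousin
```

The real system compares hundreds of thousands of markers and uses population statistics rather than fixed cutoffs, but the core idea is the same: more shared markers means a closer probable relationship.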
Learn More:
AncestryDNA™ | Learn How DNA Tests Work & More
www.ancestry.com
>> Financial Inclusion in Africa
Analysis of mobile phone data can help increase subscribers’ use of banking services, boosting their economic resilience and inclusion.
Learn More:
https://olc.worldbank.org/sites/default/files/WBG_BD_CS_FinancialInclusion_0.pdf
To be Continued…
If you learned something new about Big Data from this guide, please share it with your friends. It is up to us to encourage people to join this field, and be part of building the future.
Speaking of which, I’d love to hear from you. Reach out to me on Linkedin or email at [email protected]✉️ .
#bigdata #ai #iot #machinelearning #startups #digitaltransformations #impact #agile #newworld #graduateprogram #sharingknowledge
Author: Melody Ann Ucros
I’m a Masters in Big Data & Business Analytics Candidate @ IEBusinessSchool, and an Entrepreneurship Evangelist wherever I go. Oh, and I love chocolate! … Follow Me ❤
| 2017 Big Data, AI and IOT Use Cases | 27 | 2017-big-data-ai-and-iot-use-cases-10033db0a000 | 2018-05-07 | 2018-05-07 04:30:55 | https://medium.com/s/story/2017-big-data-ai-and-iot-use-cases-10033db0a000 | false | 1,768 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Melody Ucros | Entrepreneurial Techie who loves helping startups, playing with data & exchanging knowledge with impact-makers around the world. @IEMasterBigData ’18 | c136c563b31f | melodyucros | 195 | 47 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | b54d31a2a99a | 2018-03-30 | 2018-03-30 07:00:59 | 2018-03-30 | 2018-03-30 07:06:33 | 1 | false | th | 2018-03-30 | 2018-03-30 07:09:31 | 1 | 1003fc1d980e | 0.520755 | 0 | 0 | 0 | ในทุกวันนี้ ธุรกิจต่างต้องก้าวเข้าสู่โลกดิจิทัลคลาวด์จึงเป็นระบบพื้นฐานต่าง ๆ ที่คุนต้องทำความรู้จักและใช้ให้เป็น | 5 | Oracle จึงขอเชิญชาวไอทีทั้งหลายเข้าร่วมงาน Oracle Could Day 2018
These days, every business must step into the digital world, and the cloud is the foundational infrastructure you need to get to know and learn to use.
Oracle makes the process of bringing your data onto the cloud much easier, with leading products designed specifically for enterprises that can support future technologies such as AI, chatbots and much more.
Oracle therefore invites IT professionals to join Oracle Cloud Day 2018, a great opportunity to take advantage of the latest cloud innovations.
Date: Wednesday, April 4, 2018
Time: 08:30-17:30
Venue: JW Marriott Hotel (Sukhumvit Soi 2)
Room: Grand Ballroom, 3rd floor
At the event you will learn about:
· Security on the cloud
· How to modernize your IT systems, reduce costs, and bring your vast existing data into a single cloud
· The capabilities of Oracle Cloud to support new technologies and connect with the enterprise applications you use today
· The potential and benefits of next-generation innovation
· Q&A with Oracle executives and other industry experts
In addition, there will be a panel discussion where you can hear the vision of Oracle executives and other industry experts, before breaking into smaller groups for seminars, case studies and product demonstrations.
Admission is free; register at http://reminder.chiq-511.co.th/oracle.php or contact คุณชาติรส อินเขตน์ (แก้ม) for more information
Tel. 02-408-8770, email: [email protected]
| Oracle จึงขอเชิญชาวไอทีทั้งหลายเข้าร่วมงาน Oracle Could Day 2018 | 0 | oracle-จึงขอเชิญชาวไอทีทั้งหลายเข้าร่วมงาน-oracle-could-day-2018-1003fc1d980e | 2018-03-30 | 2018-03-30 07:09:33 | https://medium.com/s/story/oracle-จึงขอเชิญชาวไอทีทั้งหลายเข้าร่วมงาน-oracle-could-day-2018-1003fc1d980e | false | 85 | Enterprsie IT Knowledge for IT Community | null | enterpriseitpro | null | Enterpriseitpro | enterpriseitpro | null | Suwaschai_ITPro | Oracle | oracle | Oracle | 1,707 | Dearraya Naja | null | d40e6591ecfa | dearrayanaja | 22 | 19 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-19 | 2018-03-19 09:55:14 | 2018-03-19 | 2018-03-19 09:55:14 | 4 | false | en | 2018-03-19 | 2018-03-19 09:59:11 | 2 | 1003ff48854 | 2.133962 | 1 | 0 | 0 | If your company hasn’t already considered integrating artificial intelligence or its satellite technologies into its current processes, you… | 2 | Artificial Intelligence is the Next Frontier
If your company hasn’t already considered integrating artificial intelligence or its satellite technologies into its current processes, you might begin to find yourself significantly behind in the game by the end of the year.
Who adopts AI
Huge sums are currently being invested in AI, but studies by the McKinsey Global Institute reveal that adoption is still very low. In 2016, the hundreds of companies surveyed invested between $25 billion and $39 billion in artificial intelligence, triple the amount invested in 2013. Of the investing companies, 75% were industry tech giants; the other 25% were start-ups. Adoption nevertheless remains low: 41% of the surveyed companies were uncertain about the benefits AI could bring them, only about 20% said they had already adopted AI, 40% said they were contemplating it, and just 9% found themselves simply experimenting with it.
When to begin adopting AI
The challenge for new adopters of AI seems to be their current familiarity with the tech world and tech systems integrated into their business processes and workflows.
According to McKinsey’s studies, those companies who adopted AI were already strong in the digital sector (telecommunications, high tech, automotive and assembly, and financial services). Those companies with less adoption were typically in the education sector, health care, and travel and tourism. Early adopters have usually been larger businesses, adopting AI in core activities, focusing on growth over savings, and adopting multiple technologies.
Successful AI adoption experiences
Five kinds of transformation appear to be the source of value in successful AI adoption. Case analyses have produced more precise results and diagnoses through AI. AI has been effective at creating data ecosystems for businesses that manage large volumes of data. Applying new tools and techniques to established systems has been another source of added value, as has workflow integration. Finally, AI has helped businesses foster an open culture and organization.
AI has created value already in smarter forecasting, optimizing production and maintenance, targeted sales and marketing, and providing enhanced user experiences.
2018 will be a year of significant investment in AI. Hopefully, companies are conducting cost-benefit analyses and deciding early on how to begin integrating AI into their businesses.
Originally published at avoncourtpartners.com on March 19, 2018.
| Artificial Intelligence is the Next Frontier | 1 | artificial-intelligence-is-the-next-frontier-1003ff48854 | 2018-03-23 | 2018-03-23 13:51:42 | https://medium.com/s/story/artificial-intelligence-is-the-next-frontier-1003ff48854 | false | 380 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Bill D. Webster | null | b8be538ba286 | bill.d.webster | 4 | 41 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f92990997e09 | 2017-12-13 | 2017-12-13 12:27:55 | 2017-11-27 | 2017-11-27 08:00:00 | 1 | false | en | 2017-12-13 | 2017-12-13 16:10:02 | 1 | 10048798ad6 | 1.818868 | 0 | 0 | 0 | Aidoc, a leading AI startup utilizing deep learning to augment radiologists’ workflow and highlight anomalous cases, which are often highly… | 5 | Aidoc Gets CE Mark for Deep Learning Solution
Aidoc, a leading AI startup utilizing deep learning to augment radiologists’ workflow and highlight anomalous cases, which are often highly urgent, today announced that it received CE (Conformité Européenne) marking for the world’s first commercial head and neck deep learning medical imaging solution. CE marking allows for widespread commercialization of Aidoc’s solution in Europe.
Aidoc’s solution augments radiologists’ workflow through its unique ability to comprehensively detect abnormalities in imaging of both the head and neck, an anatomical area responsible for a major portion of medical images. Providing significant value for day-to-day diagnosis, time saved by Aidoc’s solution could be extremely impactful in trauma cases, where time can be the difference between the patient’s life and death.
Aidoc’s deep learning technology highlights a vast array of medical findings to help radiologists prioritize readings, aimed at facilitating interpretation and reducing time to decision when it matters most. Radiologists can now perform smart optimization of their worklist by prioritizing cases based on AI medical image analysis in conjunction with other clinically available data. Aidoc’s solution is agnostic to radiologists’ incumbent software, integrating seamlessly and providing immediate results.
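The worklist idea can be illustrated with a small sketch. This is not Aidoc's software, just a generic illustration of score-based prioritization, with hypothetical patient IDs and model scores:

```python
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    abnormality_score: float  # hypothetical model output in [0, 1]

def prioritize(worklist):
    """Sort the reading queue so likely-abnormal scans are reviewed first."""
    return sorted(worklist, key=lambda s: s.abnormality_score, reverse=True)

worklist = [
    Study("P-001", 0.12),
    Study("P-002", 0.97),  # flagged as likely abnormal: read first
    Study("P-003", 0.45),
]
for study in prioritize(worklist):
    print(study.patient_id, study.abnormality_score)
# P-002 0.97
# P-003 0.45
# P-001 0.12
```

In a real deployment the ordering would also weigh other clinically available data (as the article notes), not the model score alone, but the principle is the same: move the scans most likely to be urgent to the front of the queue.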
“The amount of medical imaging — especially CT and MR scans — is increasing dramatically, but the number of radiologists has plateaued, creating unsustainable bottlenecks and making the radiologist’s already complex work even more challenging,” said Aidoc CEO Elad Walach. “Our technology can have a monumental impact augmenting the radiology workflow, aimed at more cost-effective treatment for medical centers and practices, and the healthcare system as a whole. With the CE mark, we have a unique opportunity to update outdated technology for the benefit of hundreds of millions of Europeans.”
The CE marking was based on data collected in clinical trials validating Aidoc’s precision, which compared the solution’s results to unassisted radiologists’ review of those cases. Cedars-Sinai Medical Center in Los Angeles also assessed Aidoc’s solution earlier this year and that study resulted in impressive accuracy in scan analysis.
“In our clinical trial, Aidoc’s technology has demonstrated its ability to enhance our radiologists’ workflow, as abnormal scans can be prioritized and more carefully reviewed,” said Dr. Barry D. Pressman, MD, Chairman of Imaging at Cedars-Sinai Medical Center. “Our firsthand experience has led me to believe in the technology’s potential to achieve a significant increase in our radiologists’ productivity and accuracy. It’s a win both for our physicians and our patients. Aidoc’s AI powered solution will help our radiologists be their best, and streamline their workflow.”
For the original press release, click here
| Aidoc Gets CE Mark for Deep Learning Solution | 0 | aidoc-gets-ce-mark-for-deep-learning-solution-10048798ad6 | 2018-03-15 | 2018-03-15 20:12:15 | https://medium.com/s/story/aidoc-gets-ce-mark-for-deep-learning-solution-10048798ad6 | false | 429 | Partnering with innovative A-round startups | null | TLV-Partners-1619802984967099 | null | TLV Partners | tlv-partners | VC,STARTUP NATION,ISRAEL | tlv_partners | Machine Learning | machine-learning | Machine Learning | 51,320 | TLV Partners | An Israel based venture capital firm focused on Seed and A investments. | 86ffc8e86f07 | TLV_Partners | 52 | 31 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-10-12 | 2017-10-12 04:15:29 | 2017-10-12 | 2017-10-12 04:26:19 | 2 | false | en | 2017-10-12 | 2017-10-12 04:26:31 | 0 | 10050686d0e4 | 1.005975 | 2 | 0 | 0 | Actually ai is efiicting our world in a very good way, efficient way though it’s gonna kill a no. Of different jobs in the world of today… | 1 | Ai and it’s impact on the world
AI is actually affecting our world in a very positive, efficient way. Although it is going to eliminate a number of today's jobs by automating them, it also opens up many new opportunities for people. Already it spares us much of the complex statistics and mathematics we would otherwise have to do ourselves, and it gives us new capabilities, such as machines that can talk like humans. Someday we might have machines that are not merely machines but genuinely human-like, and that could solve a number of current problems. And if you fear an AI apocalypse, you should not: anyone who has used a command line or server-side languages, or who is a programmer, knows by now that machines are still far dumber than humans.
| Ai and it’s impact on the world | 3 | ai-and-its-impact-on-the-world-10050686d0e4 | 2017-10-27 | 2017-10-27 10:41:44 | https://medium.com/s/story/ai-and-its-impact-on-the-world-10050686d0e4 | false | 165 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Sameep Yadav | Neural Networks,Machine learning, data mining. | cb175eceafb0 | SameepYadav | 5 | 14 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d777623c68cf | 2016-12-24 | 2016-12-24 08:07:26 | 2016-12-24 | 2016-12-24 13:07:10 | 11 | false | en | 2018-09-01 | 2018-09-01 17:08:04 | 12 | 10062f0bf74c | 5.684906 | 80 | 2 | 0 | The model for deep learning consists of a computational graph that are most conveniently constructed by composing layers with other layers… | 4 | The Meta Model and Meta Meta-Model of Deep Learning
Credit: Inception (2010) http://www.imdb.com/title/tt1375666/
The model for deep learning consists of a computational graph that is most conveniently constructed by composing layers with other layers. Most introductory texts emphasize the individual neuron, but in practice it is the collective behavior of a layer of neurons that is important. So from an abstraction perspective, the layer is the right level to think about.
Underneath these layers is the computational graph, whose main purpose is to orchestrate the computation of the forward and backward phases of the network. From the perspective of optimizing performance, this is an important abstraction to have. However, it is not the ideal level at which to reason about how it all should work.
Deep Learning frameworks have evolved to provide model APIs that ease the construction of DL architectures. Theano has Blocks, Lasagne and Keras. TensorFlow has Keras and TF-Slim. Keras was originally inspired by the simplicity of Torch, so by default it has a high-level modular API. Many other less popular frameworks like Nervana, CNTK, MXNet and Chainer also have high-level model APIs. All these APIs, however, describe models. What then is a Deep Learning meta-model? Is there even a meta meta-model?
Figure: This is what a Deep Learning model looks like.
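This layer-composition idea can be made concrete with a tiny pure-Python sketch. Note that the class names here (`Dense`, `Model`) are invented for illustration and are not any particular framework's API:

```python
# Minimal sketch of a layer-oriented model API, in the spirit of
# Keras/Torch-style composition. All names here are illustrative,
# not taken from any real framework.

class Dense:
    """A fully connected 'layer' with fixed toy weights."""
    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    def forward(self, x):
        # Apply the same affine transform to each input element.
        return [xi * self.weight + self.bias for xi in x]

class Model:
    """A model is just an ordered composition of layers."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        # The computational graph: feed each layer's output forward.
        for layer in self.layers:
            x = layer.forward(x)
        return x

model = Model([Dense(2.0, 1.0), Dense(0.5, 0.0)])
print(model.forward([1.0, 2.0]))  # [(1*2+1)*0.5, (2*2+1)*0.5] -> [1.5, 2.5]
```

The point is that the user thinks in layers; the graph orchestration is hidden inside `Model`.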
Let’s first explore what a meta-model looks like. A good example is in the UML domain of Object Oriented Design. This is the UML meta-model:
Credit: Eclipse ATL project
This makes it clear that the Layers, Objectives, Activations, Optimizers and Metrics in the Keras API are the meta-model of Deep Learning. That’s not too difficult a concept to understand.
Figure. Deep Learning Meta Model
Conventionally, an Objective is a function and an Optimizer is an algorithm. However, what if we instead think of them as also being models? In that case we have the following:
Figure. Make everything into networks
This definitely is getting a whole lot more complicated. The objective function has become a neural network, and the optimizer has also become a neural network. The first reaction to this is: has this kind of architecture been tested before? It’s possible someone is already writing this paper. That’s because an objective function that is a neural network is equivalent to the Discriminator in a Generative Adversarial Network (GAN), and an optimizer that is a neural network is precisely what a meta-learner is about. So this idea is not fantastically out of mainstream research.
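The notion of an optimizer that is itself a learned, parameterized model can be sketched roughly as follows. This is only a toy illustration of the concept: in a real meta-learner the update rule's parameters would be the weights of a small network trained across tasks, whereas here they are fixed constants chosen for illustration:

```python
# Toy illustration: an optimizer whose update rule is itself a
# parameterized function. In a real meta-learner the parameters
# a and b would be learned (e.g. as weights of a small network);
# here they are fixed constants for illustration.

def learned_update(grad, state, a=0.1, b=0.9):
    """Momentum-like update rule with 'learnable' parameters a and b."""
    state = b * state + grad
    return -a * state, state

def optimize(loss_grad, w, steps=200):
    """Apply the parameterized update rule repeatedly."""
    state = 0.0
    for _ in range(steps):
        step, state = learned_update(loss_grad(w), state)
        w += step
    return w

# Minimize (w - 3)^2, whose gradient is 2 * (w - 3).
w_final = optimize(lambda w: 2 * (w - 3.0), w=0.0)
print(round(w_final, 3))  # converges to 3.0
```

Swap `learned_update` for a trained network and you have the meta-learning setting described above.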
The second reaction to this is: shouldn’t we make everything neural networks and be done with it? There are still boxes in the diagram that remain functions and algorithms. The Objective’s optimizer is one, and there are 3 others. Once you do that, there’s nothing left for a designer to define! There are no functions; everything is learned from scratch!!
So a meta-model where everything is a neural network looks like this:
Figure. Deep Learning Meta-Model
Here the model is broken apart into 3 parts just for clarity. Alternatively, it looks like this:
Figure. Deep Learning Meta-Model
What this makes abundantly clear, however, is that the kinds of layers available come from a fixed set (i.e. fully connected, convolution, LSTM, etc.). There are in fact research papers that exploit this notion of selecting different kinds of layers to generate DL architectures (see: “The Unreasonable Effectiveness of Randomness”). A DL meta-model language serves as the Lego blocks of an exploratory RL-based system, which can generate multiple DL meta-model instances in order to optimize for the best architecture. That is a reflection of the importance of Deep Learning Patterns. Before you can generate architectures, you have to know what building blocks are available for exploitation.
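Sampling candidate architectures from such a fixed layer vocabulary, the basic move of this kind of exploratory search, can be sketched as follows. The vocabulary and the scoring function here are stand-ins invented for illustration; a real RL- or evolution-based search would score candidates by actually training them:

```python
import random

# Toy sketch of architecture search over a fixed layer vocabulary.
# A real search would score each candidate by training it;
# the "score" here is a stand-in function.

LAYER_VOCAB = ["conv", "fully_connected", "lstm", "pool"]

def sample_architecture(rng, depth):
    """Draw a random sequence of layer types from the vocabulary."""
    return [rng.choice(LAYER_VOCAB) for _ in range(depth)]

def score(arch):
    """Stand-in objective: prefer convs early and exactly one lstm."""
    return sum(1 for l in arch[:2] if l == "conv") + (arch.count("lstm") == 1)

rng = random.Random(0)  # seeded for reproducibility
candidates = [sample_architecture(rng, depth=4) for _ in range(100)]
best = max(candidates, key=score)
print(best, score(best))
```

The meta-model language fixes `LAYER_VOCAB`; the search only recombines it, which is exactly the limitation discussed below.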
Now, if we make a quantum leap to the meta meta-model of Deep Learning, what should that look like?
Let’s look at how OMG’s UML specification describes the meta meta-model level (i.e. M3):
https://en.wikipedia.org/wiki/Meta-Object_Facility
The M3 level has a simplified structure that includes only the class. Following an analogous prescription, the meta meta-model of Deep Learning is defined by the following:
Deep Learning Meta Meta-Model
Despite the simplicity of the depiction, its interpretation is quite interesting. You see, this is a meta object, an instance of which is the conventional DL meta-model. These are the abstract concepts that define how to generate new DL architectures. More specifically, it is the language that defines the creation of new DL models such as a convolutional network or an autoregressive network. When you work at this level, you essentially generate new kinds of DL architectures. This is what many DL researchers actually do for a living: designing novel models.
There is one important concept to remember here, though: the instance, model, meta-model and meta meta-model distinctions are concepts that we’ve invented to better understand the nature of language and specification. The distinction is not essential and likely does not exist in separate form in reality. As an example, there are many programming languages that make no distinction between instance data and model data. Languages like Lisp are like this: everything is just data, and there is no distinction between code and data.
The idea of “code is data” applied to DL is equivalent to saying that DL architectures are representations that can be learned. We as humans require the concept of a meta meta-model to get a better handle on the complex, recursive, self-describing nature of DL systems. It would be interesting to know what the language of the meta meta-model should look like. Unfortunately, if this language is one that is learned by a machine, then it may well be as inscrutable as any other learned representation. See: “The Only Way to Make DL Interpretable”.
It is my suspicion, though, that this meta meta-model approach, if pursued in greater detail, may be the key to unlocking “unsupervised learning” or alternatively “predictive learning”. Perhaps our limited human brains cannot figure this out. However, armed with meta-learning capabilities, it may be possible for machines to continually improve upon themselves. See “Meta-Unsupervised-Learning: A supervised approach to unsupervised learning” for an early take on this approach.
The one reason this may not work, however, is that the available vocabulary or language is limited (see: Canonical Patterns), and therefore “predictive learning” is not derivable from this bootstrapping method. Meta-learners today can only discover the weights, and the weights are just parameters of a fixed DL model. A discovery, even through evolutionary methods, can only happen if the genesis vocabulary is at the correct level. Evolution appears to be a meta meta-model process.
There is plenty missing in our understanding of the language for the meta meta-model of DL. Perhaps we can discover it only if we work up the capability levels of Deep Learning intelligence. DARPA has a program researching this topic: “DARPA goes ‘Meta’ with Machine Learning for Machine Learning”. I hope to refine this idea over time.
See Deep Learning Design Patterns for more details or visit “Intuition Machine” to keep abreast about the latest developments.
For more on this, read “The Deep Learning Playbook”
| The Meta Model and Meta Meta-Model of Deep Learning | 271 | the-meta-model-and-meta-meta-model-of-deep-learning-10062f0bf74c | 2018-09-01 | 2018-09-01 17:08:04 | https://medium.com/s/story/the-meta-model-and-meta-meta-model-of-deep-learning-10062f0bf74c | false | 1,162 | Deep Learning Patterns, Methodology and Strategy | null | deeplearningpatterns | null | Intuition Machine | intuitionmachine | DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DESIGN PATTERNS | IntuitMachine | Machine Learning | machine-learning | Machine Learning | 51,320 | Carlos E. Perez | Author of Artificial Intuition and the Deep Learning Playbook — Intuition Machine Inc. | 1928cbd0e69c | IntuitMachine | 20,169 | 750 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-27 | 2018-04-27 06:36:48 | 2018-04-27 | 2018-04-27 06:41:01 | 0 | false | en | 2018-04-27 | 2018-04-27 06:41:01 | 0 | 1007d0d6ab91 | 2.528302 | 0 | 0 | 0 | 1. Get Executive Ownership | 3 | Top 10 Tips for the Data Science Team To Succeed
1. Get Executive Ownership
One of the most important contributing factors to any project is getting executive buy-in. It is your job as a data science software manager or project manager to get your executives to believe in your mission. Without them, your project will not move forward.
2. Gain the trust of your peers
Many managers don’t trust their data. Yet they want new dashboards, data science teams, the whole nine yards. If you can’t even trust your data, remember the quote from Sherlock Holmes about how data is the foundation, the building blocks, of thinking. If that is true and you don’t trust the house you have built, it will fall on top of you. Get your managers to trust you and your data!
3. First implement a simple project successfully
Everyone wants to develop the next Google or Facebook algorithm. If your team is just starting out and you want them to succeed, start small. Once you get that first win under your belt, executives will be begging you to help them with everything. Then you’ll need to work on making sure your team isn’t bombarded by requests all the time, or at least that only the right projects are being worked on.
4. Standardize your data science procedures
Data science has a lot of cool technologies and tools that allow for great insight. However, like software engineering, even with all the cool things you can do, without processes you will fall behind on projects, build bad products and fail to finish initiatives. This means you need to document your processes. It seems like a waste of time, until you start having internal breakdowns of projects.
5. Play nicely with different departments
Every business is a team game. You have accounting, finance, operations, sales and all the other departments that your team needs to work with. They all usually have their very own data warehouse, and you want that data! If you’re lucky, there is one central team that manages all the databases. Even if that is true, you will still need to get data from multiple teams. In addition, all those teams will likely have some requirements for your projects. So make sure to play nice.
6. Build a prototype first for early purchase-in
Build a prototype (sure, in Python)! Show your team and your manager what it can do. People want action, not just theories and words. Set up a prototype, and if you can, get real data. If you can’t, then fill it with some sample data, but make sure the functionality is there. Make it tangible, interactive and actionable!
7. Design for robustness and maintainability
We can’t stress this enough. Make sure whatever dashboard you build, process you put in place, or algorithm you develop is maintainable. If you leave the company tomorrow, will the project still work? Seriously! People will struggle if you left behind no documentation and never shared your code.
8. Get a Data Science Guide
There are quite a few data science consulting firms that will develop a data science guide of good enterprise practices for your team. This would require them to assess your team’s current status and work with them to understand where they could be more effective. This step is often skipped by most teams, so it is helpful to bring in outside assistance.
9. Collect as much clean data as possible
Data comes from all different sources. You can get it from internal warehouses, external APIs and pretty much anywhere. Gather as much of it as you can, and make sure it is managed and clean.
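As a minimal illustration of what "managed and clean" can mean in practice, here is a sketch using only the standard library. The field names and cleaning rules are invented for illustration:

```python
# Minimal data-cleaning sketch using only the standard library.
# Field names and rules are illustrative, not a prescription.

def clean_records(records):
    """Trim whitespace, drop records with no name, and deduplicate."""
    seen = set()
    cleaned = []
    for rec in records:
        name = rec.get("name", "").strip()
        if not name:
            continue  # drop records with no usable name
        key = name.lower()
        if key in seen:
            continue  # drop duplicates (case-insensitive)
        seen.add(key)
        cleaned.append({"name": name, "value": rec.get("value")})
    return cleaned

raw = [
    {"name": "  Alice ", "value": 1},
    {"name": "alice", "value": 2},   # duplicate of Alice
    {"name": "", "value": 3},        # empty name, dropped
    {"name": "Bob", "value": 4},
]
print(clean_records(raw))  # [{'name': 'Alice', 'value': 1}, {'name': 'Bob', 'value': 4}]
```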
10. Make a decision, give an actual opinion
As a data scientist, you have power. You have data, which means you can draw conclusions with confidence.
| Top 10 Tips for the Data Science Team To Succeed | 0 | top-10-tips-for-the-data-science-team-to-succeed-1007d0d6ab91 | 2018-04-27 | 2018-04-27 06:41:02 | https://medium.com/s/story/top-10-tips-for-the-data-science-team-to-succeed-1007d0d6ab91 | false | 670 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | jessica jessy | null | 24175eae4140 | IQOnlineTrainin | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-08 | 2018-04-08 15:48:57 | 2018-04-08 | 2018-04-08 17:14:21 | 1 | true | en | 2018-04-09 | 2018-04-09 15:25:23 | 3 | 100a2247898 | 2.739623 | 31 | 1 | 0 | Last Friday the movie “Do you trust this computer” by Chris Paine was launched (free to watch until the end of Sunday, April 10). It is a… | 5 | Don’t trust “Do you trust this computer”
from http://doyoutrustthiscomputer.org/watch
Last Friday the movie “Do you trust this computer” by Chris Paine was launched (free to watch until the end of Sunday, April 10). It is a documentary that deals with the potential consequences of Artificial Intelligence (AI), and repeats once more Elon Musk’s often quoted warnings about the dangers of AI. In fact, a representative for Elon Musk has confirmed that Musk is bankrolling the movie’s free online release.
Unfortunately, even though it features an impressive list of experts, the overall message is too biased and one-sided to be trusted.
In short, it is a dangerous distraction from the urgent need to act on achieving Responsible AI now!
And here are some other reasons:
It is unclear whom the makers intend as the audience. It is too alarmist and dystopic for a general audience; scary, even. If the aim is participatory AI, ensuring everyone’s commitment, then such a scary message will achieve just the opposite. It is time to act, not to scare; to look for solutions and work together across disciplines to get “AI for good”. This is not helpful. A great, great missed opportunity. It is time for Responsible AI, and this includes using a proper narrative and framing the problems correctly.
The lineup of experts is impressive, including several of my own ‘heroes’. However, of the 26 experts listed on the movie’s website, only 3 are women. This is a great missed opportunity for the film. There are many highly qualified female AI researchers and professionals with contributions to the field as impressive as, or even more impressive than, those of the experts interviewed. Most importantly, this leads to a skewed, biased view of the field (see point 3). A better representation of different views, one that is multidisciplinary, multidimensional, and gender and culturally balanced, would have led to a narrative more balanced about the risks and benefits of AI.
The way to deal with the impact of AI, about which the documentary is so concerned, is exactly to ensure, enforce and demand participation, inclusion and diversity.
The absurd underlying message that superintelligence is about winning.
True intelligence is about social skills, about collaboration and contribution to a greater good, about getting others to work with us in order to survive and prosper. There is no reason to expect superintelligence (if at all possible, see point 4) to be different. I suppose this obsession with ‘winning’ is a male thing, especially for the generation of men appearing in the movie, who grew up playing war-like games… But as a message this is unethical. It just shows the need for all of us to stand up for participation, inclusion and diversity in AI now!
General Artificial Intelligence and narrow AI are very, very different. The movie makes a mess of this, which is inexplicable given the quality of the experts. We already have many real applications of narrow AI. But intelligence is not a one-dimensional thing, nor a cumulative one. It is not by improving on one application of AI, or by combining many different narrow AI systems, that we will get to artificial general superintelligence. Moreover, intelligence is not just about knowing; it is about feeling, enjoying, pushing limits… I often run marathons. I don’t doubt that it is possible to build a ‘running robot’, but will it ever experience, and enjoy, what it means to run a marathon, to push through the pain and enjoy it?
The “Terminator”. Really, guys??? Are you expecting anyone to take this seriously? Such a “Terminator” view of AI is misleading and unhelpful. An ethical approach to AI also means ensuring a correct view of its capabilities and increasing public awareness. I am seriously starting to wonder whether this fixation by tech corporations on dystopic views of the future is now a way for them to move public attention away from their practices and avoid regulation and corporate responsibility. Less “Terminator” and more participation and inclusion is what is needed. This too is AI ethics.
The movie is far too long, repetitive, boring even. The message of “Responsible AI” deserved much better.
| Don’t trust “Do you trust this computer” | 233 | dont-trust-do-you-trust-this-computer-100a2247898 | 2018-04-19 | 2018-04-19 07:14:14 | https://medium.com/s/story/dont-trust-do-you-trust-this-computer-100a2247898 | false | 673 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Virginia Dignum | null | fb01d0a3bc3f | virginiadignum | 72 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-05 | 2018-01-05 22:34:12 | 2018-01-05 | 2018-01-05 22:42:18 | 1 | false | en | 2018-04-24 | 2018-04-24 17:09:10 | 1 | 100a259bd314 | 6.256604 | 2 | 0 | 0 | It is important to understand where we are to see what the future holds. We live in a time of hedonism, what people call a hookup culture… | 5 | The Cultural Revolution: Robots and Trust
It is important to understand where we are in order to see what the future holds. We live in a time of hedonism, what people call a hookup culture: a culture of disposable relationships. The most likely outcome is for the hookup culture to evolve into something similar to the situation Japan finds itself in now, with the commodification of relationships. People will burn out once this has run its course. Humans will then have to connect based on connection and the want for children, as everything else will have been parsed for profit or commodified.
There are two things that have hindered, and will continue to hinder, the hookup culture up to this point: pregnancy and rape. While contraceptives have mitigated unwanted pregnancies, they are not fully effective. As to the latter, well, there likely isn’t a solution to that.
So what happens when we add robots?
Robots and the Black Market
It is important to tackle this issue, as it will be one of the main reasons for the introduction of pleasure robots. Pandora’s box will likely be opened because of this, and it will also cause a major societal shift.
The first major landmark shift will likely be pleasure robots.
The sex trade is a multi-billion dollar industry. It commodifies everything about human relationships. It is important to understand that it exists because people want it and it is illegal. While the hookup culture has devalued the price of sex, it has also made it easier for the black market to go undetected. The black market profits from the sex trade through indentured servitude: individuals are forced through debt, violence and addiction to continue hooking. Illegal operations have high costs, both implicit and explicit.
Robots will be introduced to combat this illegal market. Robots offer many advantages: they do not need food, water or sleep, and they do not become pregnant. More importantly, they are not human, which means they can be considered property, which can be legally owned and mass-produced. Mass production will cause the further commodification of sex, which in turn will likely drop its price to virtually zero as availability increases.
This will force the illegal trade either to compete on a comparable price point, to try to compete on another level, or to go out of business. Given the costs stated previously, much of the market will dry up unless a caste system emerges. While there may be a higher-end market, it will be a small niche compared to the behemoth it is currently.
Thus legalized robots will irrefutably damage the sex trade. However, the bigger consequence will be the commodification of sex.
Flipping Culture on its Head:
Widespread pleasure robots may kill the pornography industry if it does not evolve fast enough. While the industry has tried to adapt to the internet, few will continue to pay once widespread legalized access to robots becomes available. The pornography industry will survive, however, if it adapts to augmented and virtual reality, especially if sense integration occurs.
Religions, particularly Christians, will be torn on this issue. I mention Christians specifically because their religion is what Western society stands on, due to its values, laws, history, etc. Is it adultery if it isn’t human, let alone alive? Is a man or woman still chaste if only a robot has been involved? These and many other moral questions will need to be answered.
The opening of Pandora’s box of robots, however, will eventually kill the hookup culture, as the prevalence of robots makes human interaction irrelevant. Men in particular who are disenfranchised by the current culture and often drop out will now have an outlet. These individuals will be the first, and will likely proselytize their lifestyle. To those who may respond with snark, I will point out that this is already common in Japan. What starts as a taboo often becomes a societal norm.
This will not be without its consequences. The more men who switch from traditional relationships to mechanical ones, the more women are left without partners. While some of these gaps may be filled by mechanical partners, it will undoubtedly not be at the same pace, due to women’s higher desire for connection and men’s higher desire for sex, one of which is easily fulfilled, the other much harder. Thus a glut of women will likely compete for a smaller number of men. With more options, men will become more selective. To compete for a mate, a war of escalation will occur. Combined with technology’s need for a culture of trust, this will increase the likelihood that women will return to chastity until marriage.
The legalization of these robots could potentially prevent societal collapse and violence, as men and women who are unable to form social connections would now have an outlet for their unfulfilled needs. Individuals who have dropped out of society could now integrate to some degree. As the idea of pleasure robots becomes less stigmatized, more people will begin to replace real relationships with robots. Robots will evolve to fulfill these new roles, which will cause the further decline of relationships, marriage and birthrates.
Children and Robots:
Male birth control methods are primitive at best. New innovations that reduce the downsides will make having children a more deliberate act. Unplanned children will drop dramatically. This will also reduce potential “gold diggers”. Children will be a deliberate choice, by both parties.
Thus birthrates will plummet, far more than anyone can possibly imagine. People will call it the end of humanity; however, they lack vision. Artificial wombs may be the answer, as humans could be created without the need for a female host. The other answer resembles a Margaret Atwood novel, due to a crypto caste system.
Artificial wombs may also cause the idea of children to be more thought out, given the likelihood of genetic engineering. Children will be altered for optimum health. Mate selection in the future is quite likely to be based on genetics. Artificial wombs will likely lessen the maternal instinct between a woman and her potential child, which will make the child more of a commodity than a unique being, due to less of a psychological attachment. Infanticide may increase due to this lack of connection.
Maternity as we know it today may be reserved for the rich, as the expense of rearing a child normally will be extremely high when compared to an artificial womb. It is entirely possible that a system of sperm banks, genetic engineering and artificial wombs for adoptive parents emerges, one that selects for genetic diversity to reduce the likelihood of mass extinction while removing undesirable traits.
The Outcome:
Personal robots will at first be reserved for the wealthy, due to their newness and complexity, so robots will operate in a manner similar to medieval brothels: designated areas, cordoned off, especially if Artificial Intelligence continues to grow at the speed of Moore’s Law, and also due to the cost of ownership and maintenance. This will eventually move towards mass ownership as costs come down and the cultural stigma subsides. Crimes regarding adult prostitution and sexual violence will likely drop. Poverty may increase as prostitution becomes a less viable way to earn money.
The culture will eventually accept the robots, albeit begrudgingly, as they move from pleasure to romantic companion. Women will likely choose chastity if the above situation occurs, even if they have the full cultural, legal and independent rights to do otherwise. But why? A war of escalation. While the majority may live their lives how they like, a small group will counter the rise of robots by doing the opposite of what the group does.
This group will be more valued due to its rarity, like all rare things, especially given the abundance of pleasure. This group will be hated; however, they will be more successful in obtaining relationships, all else equal. As a result, a cultural movement will emerge.
In return, these women will want longer courtships, as they only have one chance due to their choice. Men will agree, as pleasure is bountiful. Courtship will occur again, as the value of relationships takes on new meaning: a culture of trust. Something similar to the Victorian era will develop. Courtship and chastity will be normal. Chaperones, whether robots or via the Internet of Things, are entirely possible. Trust, and in turn honour, will be valued, due to the changing nature of relationships and the irreversibility of cryptographic transactions. Who you spend your time with, personally and in business, will matter more.
The irreversibility of cryptocurrency transactions will impact business. Marriage unions, the joining of families to secure alliances and business ties, may also occur once again. It sounds ridiculous now, but how likely are you to rip off your family? I bet less likely than some random stranger.
Lineage and dynasty, two words very uncommon today, will likely make a resurgence into the public consciousness and lexicon.
Conclusion:
It is difficult to predict the full extent of how robots will change human society. This article offers a brief glimpse into what it may look like. Robots will likely speed up the current culture of hedonism, which will cause an eventual reversal. Three main kinds of robots will change human society: pleasure robots, companion robots and artificial wombs.
One thing is for certain: they will be in every part of our lives, ubiquitous. They will often change things in ways we did not expect. It is possible men and women will play more of an active role in courtship, with the ideas of connection and children being at the heart of a relationship. If so, this would create more stable and longer-lasting relationships.
| The Cultural Revolution: Robots and Trust | 3 | the-cultural-revolution-robots-and-trust-100a259bd314 | 2018-04-24 | 2018-04-24 17:09:11 | https://medium.com/s/story/the-cultural-revolution-robots-and-trust-100a259bd314 | false | 1,605 | null | null | null | null | null | null | null | null | null | Sex | sex | Sex | 23,511 | A.l. | Persuader. Futurist. Blockchain. Sovereign Individual. https://twitter.com/Kairon01 | 8939e3e1c0ae | Kairon | 39 | 8 | 20,181,104 | null | null | null | null | null | null |
0 | # Automated Facebook login with Selenium (urlopen/BeautifulSoup/datetime/time
# are imported here for the later scraping steps of this series).
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
from datetime import datetime, timedelta
from selenium import webdriver
import time
# Download the Chrome driver from:
# https://sites.google.com/a/chromium.org/chromedriver/downloads
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument("--test-type")
# Point Selenium at the chromedriver executable you downloaded.
driver = webdriver.Chrome("Your drive:\Your directory\chromedriver.exe", options=options)
my_url = 'https://www.facebook.com/'
driver.get(my_url)
# Locate the login form fields by their HTML ids ('senha' is Portuguese for password).
login = driver.find_element_by_id('email')
senha = driver.find_element_by_id('pass')
login.send_keys('your user')
senha.send_keys('your password')
submit_button = driver.find_elements_by_xpath('//*[@id="loginbutton"]')[0]
# Now just submit the form using the click method and voilà, we're in.
submit_button.click()
| 10 | 32881626c9c9 | 2018-09-24 | 2018-09-24 00:20:04 | 2018-09-24 | 2018-09-24 00:21:16 | 1 | false | en | 2018-10-03 | 2018-10-03 14:04:42 | 4 | 100c041c1bf8 | 2.079245 | 4 | 0 | 0 | Hi everybody , this little snippet will show you how to use a selenium lib in order to make an automated web scraping you can use to… | 5 | A Little Snippet to Automate Web Scraping using Python and Selenium
“grayscale photo of dew on spider web” by Rúben Marques on Unsplash
Hi everybody, this little snippet will show you how to use the selenium lib to do automated web scraping that you can use to analyse data, find patterns, etc.
This snippet is the first of many others; each one will show you the next step. This one shows the automated connection to a web page, in this case Facebook. The next will show how to scrape a web page using Beautiful Soup; after that we’ll download the data, keep it in a database, and so on.
According to the documentation, the selenium package is used to automate web browser interaction from Python, and is commonly used for automated tests.
You can find more information in https://pypi.org/project/selenium/
Several browsers/drivers are supported (Firefox, Chrome, Internet Explorer), as well as the Remote protocol.
Supported Python versions: Python 2.7, 3.4+
For the installation you can use one of these 3 options:
using pip:
pip install -U selenium
You can download the source distribution from PyPI (e.g. selenium-3.14.0.tar.gz), unarchive it, and run:
python setup.py install
Finally if you’re using Anaconda:
conda install -c conda-forge selenium
The first thing we need to do is import the libraries we’ll use in this snippet.
In this case, for this first step, the most important one is selenium, with which we’ll make the automated connection.
After importing the libs, in order to run the code, we need to choose the correct driver to use in.
Selenium requires a driver to interface with the chosen browser.
Here, we’ll use Chromium’s driver (ChromeDriver), but many others can be used.
You can find the driver, and more information, on the Selenium project’s page.
With ChromeDriver installed, we need to set some options in order to run it.
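A minimal sketch of those options — headless mode and a fixed window size are common choices. The `driver_path` default is an assumption; point it at your own chromedriver binary:

```python
def launch_chrome(driver_path="./chromedriver"):
    """Start Chrome through selenium with a couple of common options.
    `driver_path` is a placeholder; point it at your chromedriver binary."""
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless")              # no visible browser window
    options.add_argument("--window-size=1366,768")  # a sane default viewport
    return webdriver.Chrome(executable_path=driver_path, options=options)
```

The import lives inside the function so the sketch can be loaded even before selenium is installed.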
The next step is to set the URL we’ll use and fetch it with the driver.
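In code, that step is a one-liner; sketched here as a tiny helper (the default URL is Facebook, as in this snippet):

```python
def open_page(driver, url="https://www.facebook.com"):
    """Navigate the driver to `url` and return the page title as a
    quick sanity check that the page actually loaded."""
    driver.get(url)
    return driver.title
```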
In our case we have a form to fill in order to access the web page, so we need to get the HTML ids of the respective fields. It’s easy to find them using driver methods like find_element_by_id or find_elements_by_xpath.
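Putting that together, a hedged sketch of the login step. The field ids 'email', 'pass' and 'loginbutton' matched Facebook's login form at the time of writing, but they are assumptions and may have changed:

```python
def fill_login_form(driver, email, password):
    """Locate the form fields by their HTML ids, type the credentials,
    and return the submit button, ready for .click()."""
    driver.find_element_by_id("email").send_keys(email)
    driver.find_element_by_id("pass").send_keys(password)
    return driver.find_element_by_id("loginbutton")
```

With the button in hand, `fill_login_form(driver, "me@example.com", "secret").click()` performs the submission shown at the end of this post.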
In the next topics we’ll learn how to get the data using Beautiful Soup, store it in a database and analyse it using some tools like pandas, matplotlib, sklearn, etc.
Enjoy the code, improve it if you want!
See you!!!!
| A Little Snippet to Automate Web Scraping using Python and Selenium | 11 | hi-everybody-this-little-snippet-will-show-you-how-to-use-a-selenium-lib-in-order-to-make-an-100c041c1bf8 | 2018-10-03 | 2018-10-03 14:04:42 | https://medium.com/s/story/hi-everybody-this-little-snippet-will-show-you-how-to-use-a-selenium-lib-in-order-to-make-an-100c041c1bf8 | false | 498 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Python | python | Python | 20,142 | Alexandre Dall Alba | null | f46e6d397edf | alexandredallalba | 14 | 16 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | dee560a00777 | 2018-08-02 | 2018-08-02 23:14:38 | 2018-08-03 | 2018-08-03 03:36:38 | 1 | false | en | 2018-08-03 | 2018-08-03 04:14:59 | 3 | 100c15ab725a | 1.309434 | 0 | 0 | 0 | Our new Unleash live Release Cozumel (v1.15) has just arrived with many new features and some bug fixes. | 4 |
Product Release Wrap-up July
Our new Unleash live Release Cozumel (v1.15) has just arrived with many new features and some bug fixes.
Take off for a flight at https://cloud.unleashlive.com
Here is the detailed run down:
Enhanced Features:
HD live video streaming latency decreased by another 20%. Benchmarking shows we are now about 50–80% faster than a typical YouTube or Facebook live stream. Read here for more details
Refreshed A.I. live in-stream UI overlays.
Added in-stream video A.I. object count analytics.
3D Modelling jobs allowance increased from 250 to up to 500 images on all Business subscriptions. Contact us for even higher allowances.
Full screen Point cloud and 3D Model view for more immersive showcases.
Point cloud tools menu updated with enhanced measurements and rendering options.
Additional browser theme options for rich charcoal titanium background or bright white for 3D models.
More fluid touch and mouse interaction for 3D models to inspect any model location. Pan/tilt, pinch/zoom, rotate.
For even faster browser navigation, we added advanced model controls, enabling different lighting for models and low, med, high resolution of models.
Quick view of latest media library items.
Expanded inventory of user guides with detailed workflow steps and Youtube videos.
Enhanced sharing functionality of VR models.
Several new A.I. inference models from various 3rd party developers available for testing in connected HD live streams. This is still an experimental feature. For example: Improving track inspections with automation.
A.I. developer sandbox features updated.
Bug Zapper:
Several users reported issues with Google sign-in on older Chrome browser versions.
Some users reported issues with lack of thumbnails on older Safari browser.
Linked Unleash live Youtube user guides sometimes did not start playing with certain privacy settings in Chrome.
Several Android 6 and iOS 11 stability fixes.
| Product Release Wrap-up July | 0 | product-release-wrap-up-july-100c15ab725a | 2018-08-03 | 2018-08-03 04:14:59 | https://medium.com/s/story/product-release-wrap-up-july-100c15ab725a | false | 294 | Unleash live is a cloud based software platform ingesting live video and imagery, applying real time A.I. analytics and delivering instant decision making capabilities | null | UnleashLive | null | Unleash live Blog | unleash-live-blog | ARTIFICIAL INTELLIGENCE,REAL TIME ANALYTICS,COLLABORATION TOOLS,DRONES,3D MODELING | Unleashlive | Hd Live Streaming | hd-live-streaming | Hd Live Streaming | 1 | Unleash live | Powerful A.I. live stream for faster decisions | 86550e962a4f | unleashlive | 6 | 13 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-23 | 2018-03-23 17:12:10 | 2018-03-01 | 2018-03-01 11:16:51 | 0 | false | en | 2018-03-23 | 2018-03-23 17:12:21 | 4 | 100c470f532b | 2.350943 | 1 | 0 | 0 | AI Saturdays , a global event conducted by AiDevNepal has been very successful till date. It has been heading forward with the motto “Learn… | 1 | AI Saturdays by AiDevNepal : A Review from participant by Raisha Shrestha
AI Saturdays, a global event conducted by AiDevNepal, has been very successful to date. It has been moving forward with the motto “Learn, Share and Grow together”. It is a great opportunity for learners to be part of AI Saturdays and learn about Artificial Intelligence (AI) from well-experienced mentors who are professional AI developers in Nepal. AiDevNepal has taken a great step by taking the initiative to conduct this global event in Nepal, and has enlightened a number of AI enthusiasts by giving them the opportunity to get involved in these workshops.
I myself, being a member of the workshop, would love to share the experience I gathered. The first workshop involved interaction and knowledge sharing from well-known, experienced professionals in the field. In the later workshops we learnt about the basics of AI, tools used for AI implementation, and basic libraries and functions. Then in the next workshops we learnt about the implementation of AI. We are in the process of learning: we learnt to implement a number of things like decision trees, deep learning and so on, and dealt with examples which fall under these categories. We got sound knowledge of topics about which we had only surface-level information. I am glad I got to be a part of these workshops and learnt this much.
From a very surface level, we rise a step further in each workshop. This is making us very enthusiastic to learn more in the field of AI. As a result of this enthusiasm, we are working on our AI projects, an assignment given to us by our mentors, even on 1st March, when Holi is celebrated in Nepal. Instead of playing Holi, people are working on their code to get more accuracy in their AI projects. This is a great development of interest, and AiDevNepal deserves a round of applause for enlightening people with knowledge of AI and making them more enthusiastic about the field.
To date, 6 workshops have been conducted along with 2 interactive AI meetups. A number of workshops are yet to come, and all of us are very excited to learn further. The organising team always encourages us to learn, share and grow together, so the entire AiDevNepal team, including the organisers and participants, shares a lot of knowledge. We discuss our confusions and share discoveries or helpful tutorials in our Facebook group “DN: AI Developers Nepal” or “AiDevNepal”. In this way we actually learn, share and grow together.
The day when all 14 workshops of AI Saturdays are completed will be a day of pride for all of us. We participants will always try to share the knowledge gained from AiDevNepal by staying associated with AiDevNepal itself. We shall try to live the motto of “Learn, Share and Grow together” by truly implementing it, so that AI is successfully established in Nepal some day. As a very good initiative has already begun and a number of enthusiasts are getting enlightened, that day is not too far. Cheers to AiDevNepal for this great initiative.
AiDevNepal has prepared a number of materials for the workshops, which are also available at their GitHub link mentioned below. Everyone is free to use the material but is requested to credit AiDevNepal whenever the material is used for the purpose of knowledge sharing. You can also subscribe to AiDevNepal on YouTube and watch informative videos related to AI.
Website of AiDevNepal : https://aidevnepal.github.io/
Github Link : https://github.com/AiDevNepal
Youtube Channel Link: https://www.youtube.com/channel/UChk69vbMbxBPRutHpcDfe0Q
Originally published at medium.com on March 1, 2018.
| AI Saturdays by AiDevNepal : A Review from participant by Raisha Shrestha | 1 | ai-saturdays-by-aidevnepal-a-review-from-participant-by-raisha-shrestha-100c470f532b | 2018-05-05 | 2018-05-05 00:38:29 | https://medium.com/s/story/ai-saturdays-by-aidevnepal-a-review-from-participant-by-raisha-shrestha-100c470f532b | false | 623 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | AIDevNepal | An Artificial Intelligence community in Nepal. | 902f6a19bfdb | aidevelopersnepal | 44 | 31 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-10 | 2018-05-10 06:58:52 | 2018-05-10 | 2018-05-10 06:59:24 | 0 | false | en | 2018-05-10 | 2018-05-10 06:59:24 | 1 | 100d3c69c32c | 0.067925 | 0 | 0 | 0 | https://www.linkedin.com/pulse/age-theory-over-machine-learning-emerges-sam-ghosh/ | 3 | Is the age of theory over as Machine Learning emerges?
https://www.linkedin.com/pulse/age-theory-over-machine-learning-emerges-sam-ghosh/
| Is the age of theory over as Machine Learning emerges? | 0 | is-the-age-of-theory-over-as-machine-learning-emerges-100d3c69c32c | 2018-05-10 | 2018-05-10 06:59:25 | https://medium.com/s/story/is-the-age-of-theory-over-as-machine-learning-emerges-100d3c69c32c | false | 18 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | Sam Ghosh | Founder at Wisejay Private Limiter and SEBI Registered Investment Adviser | 338669bc3e3f | samghosh | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-06 | 2018-01-06 04:00:43 | 2018-01-07 | 2018-01-07 07:20:10 | 9 | false | en | 2018-01-08 | 2018-01-08 05:28:09 | 4 | 100deb28f54d | 5.70566 | 17 | 0 | 0 | Artificial Intelligence (AI) is currently one of the most popular topics in the industry with seemingly endless applications in everything… | 5 | AI based UI Development (AI-UI)
Artificial Intelligence (AI) is currently one of the most popular topics in the industry, with seemingly endless applications in everything from matchmaking to self-driving cars. The most disturbing prediction we hear about AI is that it will result in massive job losses across industries. Can AI also affect IT jobs? If so, which skills will be impacted? When? How? These are questions every software engineer must be asking.
Creative designers or business users come up with UI (User Interface) ideas for an application or website on a sheet of paper, on a whiteboard, or on a fancy graphics tablet. It is the job of a UI developer to convert the design idea or wireframes into a working UI while keeping the creative design intent in mind. This is one of the most complex, time-consuming steps in the software development process. In this article, we will see an interesting example of applying AI to UI development. We will try to understand it by comparing it with the human learning process and by (over)simplifying the technology behind it.
Typical hand drawn design for a UI
Mimicking our eyes and brain
As a child, we learn to observe and label the things around us. The learning happens through feedback provided by our parents and others. Our brain gets trained to look for patterns, textures, colors and sizes in an object to identify it. In AI, the Convolutional Neural Network (CNN) is a class of deep neural networks that is very effective at recognizing the objects in a given image.
The basic idea behind a CNN is to look for shapes or patterns, with the help of various filters, in small parts of the image one at a time. The figure below shows two filters being applied to look for slanted lines. Features are extracted based on the filter results. Finally, by voting over the extracted features, the algorithm can conclude which objects are in the image.
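The filtering step can be sketched in a few lines of Python (a toy illustration of the idea, not code from the original post): slide a small filter over the image and record how strongly each patch matches it.

```python
import numpy as np

def apply_filter(image, kernel):
    """Valid-mode 2D correlation: slide `kernel` over `image` and sum
    the element-wise products at every position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 3x3 diagonal-line filter responds most strongly where the image
# contains a slanted line of the same orientation.
diagonal_filter = np.eye(3)
image = np.eye(5)          # a toy image: a single diagonal line
response = apply_filter(image, diagonal_filter)
```

In a real CNN many such filters are learned from data rather than hand-crafted, but the sliding-window computation is the same.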
Describing the image
The child starts uttering a single word label for each identified object, such as ‘ball’. Soon she will also learn to identify the relationship between the identified objects and describe it in a short sentences such as ‘a red ball and a brown bat is on the lawn’. The learning happens through a cycle of trial and errors.
In AI, constructing sentences from the word labels for a given image is the job of LSTM (Long Short Term Memory) networks. This process is called image captioning.
Below are some examples of AI based image captioning. More such examples are at http://cs.stanford.edu/people/karpathy/deepimagesent/
Image captioning is achieved by appending an LSTM network to the CNN discussed earlier. LSTMs are very effective at language-related tasks because of their unique property of referring to their previous outputs. The LSTM generates one word at a time. The next word is decided based on its inputs, but also on the previously generated words; e.g. in the sentence ‘My name is John.’, you can say ‘John’ only if the three earlier words were ‘My name is’. The sequence of words forms a sentence. Like any other neural network, the LSTM learns to build sentences through training.
UI Development Process
Typically UI development happens through following steps,
Creative designers or business users of the application likes to hand draw their UI design ideas on a whiteboard or a graphic tablet or even a piece of tissue paper.
Designer uses wireframing tool on a computer to create the same design again. This is a redundant step.
UI developers translate the wireframes into working UI code. The developers and designers go through an iterative process till the expected UI is built. This step is time-consuming and repetitive.
AI based UI development
What if the hand-drawn design idea is directly translated to a working UI? AI can do this. Below is an example of the same.
UI generated with pix2code
In image captioning, AI describes objects (such as dog, horse) in a scene and builds a English sentence describing the objects and their relationship with each other.
In the case of UI code, the UI design is like a scene, but instead of a dog and a horse it contains UI objects like buttons and sliders. Instead of the English language, the objects are described in UI code. The UI code has a limited vocabulary (such as button, slider), and relationships between objects are described with a few more words (such as position, hierarchy). Thus UI code generation can be considered a specific use case of image captioning.
UI code generation goes through two stages.
Training Stage:
Imagine a child (child_1) learning to look at many UI images and creating a list of the UI objects for each UI image. Another child (child_2) learns to read the descriptive code for the same UI. A third child (child_3) learns to find the relationship between child_1’s and child_2’s learning. Together, they learn to observe an image and create the corresponding UI code.
A CNN takes the role of child_1, an LSTM that of child_2, and another LSTM that of child_3. (For a complete technical explanation, refer to the link for the pix2code paper at the end of the article.)
Sampling Stage:
The trained model is now ready to process a hand-drawn GUI drawing. The code context is updated after each prediction to contain the last predicted token. The resulting sequence of DSL tokens is compiled into the desired target language (e.g. Android, iOS, HTML) using traditional compiler techniques.
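The sampling stage can be sketched as a greedy decoding loop. This is an illustration of the general idea, not pix2code's actual implementation; `encode_image` and `predict_next_token` are hypothetical stand-ins for the trained CNN and LSTMs:

```python
def sample_ui_code(encode_image, predict_next_token, drawing, max_tokens=200):
    """Generate a DSL token sequence for a hand-drawn GUI.
    `encode_image(drawing)` plays the role of the CNN (child_1), and
    `predict_next_token(features, context)` plays the role of the two
    LSTMs (child_2 and child_3): it predicts the next token from the
    image features plus the tokens generated so far."""
    features = encode_image(drawing)
    context = ["<START>"]
    while context[-1] != "<END>" and len(context) < max_tokens:
        context.append(predict_next_token(features, context))
    return [t for t in context if t not in ("<START>", "<END>")]
```

The resulting token list (e.g. `["stack", "button", "slider"]`) is what a traditional compiler pass then turns into Android, iOS or HTML code.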
Benefits of AI-UI
For designers and developers, AI based solution would save critical time early on a project by rapid prototyping, boost iteration cycles, and eventually enable the development of better apps.
They will save on all the trivial, repetitive and redundant tasks.
It also will allow designers and developers to focus on what matters the most that is to bring value to the end-users.
The entry barrier to building apps will become really low. Learning to use a UI design tool takes time; learning to code takes even more time. However, everyone can draw a UI on paper. This will allow your grandma to go from an idea to a working UI running on her phone in a matter of seconds.
Current and future state
As of now, only a few AI-based UI development products (e.g. Uizard) are being developed, and they have not yet reached the maturity to replace human UI developers. Still, they are good as assistants for any UI developer. In the coming years we may see new approaches and improved AI products, where this assistant takes over the role of the experienced UI developer. It’s time for UI developers to look at the changing trends and get ready for reskilling.
Still, many of us may think that generating UI code from a creative designer’s drawings is one thing, but AI itself cannot come up with its own creative UI designs. We still need artists and creative designers, right? Maybe wrong! AI has Generative Adversarial Networks (GAN), and Creative Adversarial Networks (CAN) have proven able to generate art, sometimes better than humans. We will discuss this in another article.
References
pix2code: Generating Code from a Graphical User Interface Screenshot by Tony Beltramelli https://arxiv.org/pdf/1705.07962.pdf
Deep Visual-Semantic Alignments for Generating Image Descriptions by Andrej Karpathy, Li Fei-Fei http://cs.stanford.edu/people/karpathy/cvpr2015.pdf
| AI based UI Development (AI-UI) | 195 | ai-based-ui-development-ai-ui-100deb28f54d | 2018-06-19 | 2018-06-19 09:24:51 | https://medium.com/s/story/ai-based-ui-development-ai-ui-100deb28f54d | false | 1,194 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Vijay Betigiri | null | 6139f9655848 | vijay.betigiri | 38 | 13 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-17 | 2017-11-17 22:46:20 | 2017-11-18 | 2017-11-18 23:06:38 | 2 | false | en | 2017-11-27 | 2017-11-27 16:43:52 | 2 | 100e4578c9fc | 3.205975 | 5 | 0 | 0 | The music industry is a business theorist’s dream. It is a great case study, because unlike many industries, it faces almost all the… | 5 | Grow your learning @ incentivetheory.com
Can Technology Replace Record Companies?
The music industry is a business theorist’s dream. It is a great case study, because unlike many industries, it faces almost all the possible issues a market could face (regulatory hurdles, high visibility, innovative, fast paced, low barriers to entry, high barriers to sustainability). It reminds me of an internet accelerated version of the 1980s version of the disk drive industry.
In the wake of my recent article, I have had a few people share articles about the emergence of the company United Masters from stealth mode. For those of you that don’t know, before I started at Sonos, I briefly worked on an application that competed in a similar space. From this experience, I learned that music analytics is a very competitive space with a laundry list of modularized niche suppliers (Swift & Next big sound for streaming analytics, MAX for paring artist for social analytics, WAVO for tour ads). Things to consider…
The Customer Experience
As noted in my last article, the record labels’ customer is the artist, not the music listener. A company’s customer experience really matters. Taking all the inefficiencies out of the supply chain might actually affect a record company’s ability to win business. If an independent artist gets big enough, they don’t want to be treated like a commodity user of a technical platform; artists and their teams are willing to pay for a luxury experience, especially if that experience comes at the expense of uncertain future gains.
People are Loss Averse
In economics, it is known as Prospect Theory, which states that people are loss averse, meaning they perceive losses more strongly than gains. If a small artist or team doesn’t go with a big label now, they may not have the opportunity to do so in the future, a potentially big loss if the artist fails. If United Masters can quantify and mitigate the monetary value lost if an act succeeds, they could persuade some managers to accept the risk and cost associated with staying independent. Clearly articulating a value proposition for theoretical future gains is difficult.
Conflict of Interest
In management theory, it is known as the Realtor Effect. If a realtor is selling a house, they only get a small percentage of the sale. Instead of fighting for incremental gains for the seller, the realtor’s time is better spent finding other houses to sell, whether or not the extra work is in the seller’s best interest. People on the business side of an artist’s career are incentivized to have the artist work with a larger record company, because their return per hour invested is more attractive: the business person spends less time working with an act but still sees attractive financial returns. The business person’s time is better spent finding and signing new acts.
Platform
It’s extremely difficult to build a platform based on assets you don’t own. Surviving in a company’s supply chain is difficult, because you are ultimately beholden to the owner of the content. United Masters doesn’t own the Streaming Services’ data or distribution network.
To The Point
If someone is going to disrupt the market with a low-end disruption, it would be an industry insider like United Masters’ founder. There are two reasons here…
A longtime industry insider is the only person familiar enough with the inefficiencies that are actually crucial to winning business.
Low-end market disruptions work in a B2B environment. Industry insiders know how to speak to the decision makers on the business side of an artist’s career, which makes them more qualified to explain the value proposition.
Conclusion
If you read my article closely, you’ll notice I don’t take a stand on the viability of United Masters. I don’t think “Do you think the company will succeed?” is the right question; I can’t tell you if the company will succeed. I can tell you how the incentives work. However, just because the incentives line up for the company does not mean it will succeed. If a company understands the incentives, it can make the right decisions, but that is only half the battle. The ability to design and implement creative solutions that leverage these incentives is what separates success from failure.
If you enjoyed, Don’t forget to click and hold the 👏 so other people can find the article. Incentive Theory is a publication that focuses in data science and direct to consumer strategy.
| Can Technology Replace Record Companies? | 109 | can-technology-replace-record-companies-100e4578c9fc | 2018-03-17 | 2018-03-17 18:50:09 | https://medium.com/s/story/can-technology-replace-record-companies-100e4578c9fc | false | 748 | null | null | null | null | null | null | null | null | null | Music | music | Music | 174,961 | Justin Hilliard | null | 96578e045ed6 | justinhilliard | 137 | 112 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-16 | 2018-04-16 19:52:47 | 2018-04-17 | 2018-04-17 05:56:00 | 4 | false | en | 2018-04-17 | 2018-04-17 06:04:58 | 12 | 100fd9450b25 | 5.364151 | 0 | 0 | 0 | During this year’s World Economic Forum Annual Meeting in Davos, I had the privilege of spending one week with the world leaders in… | 5 | Session “Ask About: AI and Diagnosis” at the Annual Meeting 2018 of the World Economic Forum in Davos, January 23, 2018 Copyright by World Economic Forum
AI & Blockchain Predictions in Davos: Myth or Reality?
During this year’s World Economic Forum Annual Meeting in Davos, I had the privilege of spending one week with the world leaders in business, government and civil society, discussing predictions about technology and politics. Three months after that intense week spent in the Magic Mountains, now that the snow is melting, have those predictions become reality?
The coverage of the Annual Meeting of the World Economic Forum 2018 was dominated by tech and innovation, in addition to Mr. Trump’s attendance, of course. Tech issues dominated the scene and the discussion both on social media (source: Brunswick Insight) and at the event. The conference’s agenda is a roadmap to key digital transformation technologies empowering the “Fourth Industrial Revolution”, a disruptive economic and societal concept introduced at the event in 2013, predicated on the confluence of physical, digital and virtual technologies. Five years later, the embracing of digital transformation, Artificial Intelligence and blockchain were among the most discussed topics in Davos.
1. Artificial Intelligence (AI)
AI is the new technological frontier over which companies and countries are vying for control, especially the US and China. According to the latest report from McKinsey, Google’s parent company Alphabet invested roughly $30 billion in developing AI technologies. Baidu, the Chinese search giant, invested $20 billion in AI.
AI has been on the scene for many years, and it’s now evolving so fast that it is going to change our lives. Google’s CEO Sundar Pichai compared artificial intelligence to the discovery of electricity or the mastery of fire, describing it as “probably the most important thing humanity has ever worked on.” “Even more than fire, as steam AI will act as a multiplier of human work,” reinforced Christian Lanng, CEO at Tradeshift. The role of AI to leverage the power of data is key. For me it was an honor to be part of the panel for the SOLVER Series in Davos to discuss data for healthcare. With Kees Aarts, Beth Weesner and Olivier Ouiller we discussed how data from different businesses can be used to better serve people’s lives and revolutionize preventive tech. AI combined with IoT will change the rules of the game completely.
Davos 2018 Prediction: Artificial Intelligence & Tech Geopolitics
The most mentioned business leader was George Soros, the investor and chairman of Soros Fund Management, who made headlines with his speech about tech geopolitics. “It is only a matter of time before the global dominance of the US IT monopolies is broken. Davos is a good place to announce that their days are numbered,” predicted Mr. Soros, as tech giants “are poised to dominate the new growth areas that artificial intelligence is opening up.”
China’s proportion of global AI startup funding as a percentage of dollar value. Image: CB Insights.
Three Months Later: Reality
China has taken the crown in AI funding, overtaking the US: a Chinese facial recognition surveillance company is now the world’s most valuable AI startup. In April 2018, SenseTime Group has raised funding from Alibaba and other investors at a valuation of more than $3 billion, becoming the world’s most valuable artificial intelligence startup. “In China there is an advantage in areas like facial recognition because of the privacy that exists in the U.S. and elsewhere in the EU, and some of the very best facial recognition technology in the world that I’ve seen is in China,” said Breyer Capital founder Jim Breyer, an indirect investor in SenseTime through IDG.
2. Blockchain & Crypto
Everything this year in Davos was about cryptocurrencies and Bitcoin. While in 2017 the event organized by WISeKey on “Blockchain and the Internet of Value” was an exclusive meeting with 300 delegates where the leading blockchain expert Don Tapscott presented his book Blockchain Revolution, this year Carlos Creus Moreira, founder and CEO of WISeKey, was swarmed by a huge crowd in Davos. Blockchain came up in one panel discussion after the next. Everyone was excited about blockchain technology, naturally, and even more so about Bitcoin. Bitcoin’s value is ten times what it was the previous year, so this is no surprise. What is behind Bitcoin and other crypto? Blockchain is a shared ledger technology that powers cryptocurrencies but also allows encrypted data on anything from money to medical records to be shared between companies, people and institutions. This protects data from fraud while instantly updating all parties concerned. An incredible number of businesses outside of cryptocurrencies are leveraging blockchain, and it will change the way we work dramatically.
Davos 2018 Prediction: Blockchain & Crypto
While the potential of blockchain, the underlying technology behind cryptocurrencies, was praised, bitcoin got slammed. “Bitcoin is a fraud,” a statement made by Jamie Dimon, CEO of JPMorgan Chase, raised many discussions, and in Davos he stated, “Cryptocurrency: it’s not my interest.” “There is no intrinsic value for something like bitcoin so it’s not really an asset one can analyze. It’s just essentially speculative or gambling,” reinforced Stephen Poloz, the governor of the Bank of Canada.
Copyrights World Economic Forum
Three Months Later: Reality
We all know that after Bitcoin almost hit 20,000 USD in December 2017, it had a big drop, and the decline continued after Davos: the BTC-USD rollercoaster is now between 7,000 and 9,000 USD.
Orbis Research has just released its new report, “Blockchain Technology Market Forecasts, 2017–2025”: the blockchain technology market, valued at approximately USD 350 million in 2016, is anticipated to reach up to USD 10.5 billion, growing at a lucrative rate of more than 50% over the forecast period 2017–2025. The market’s growth is attributed to the increasing penetration of cryptocurrency and ICO, and to the growing adoption rate of blockchain-as-a-service, blockchain to enable faster transactions. Moreover, the rising rate at which the blockchain technology is being adopted for payments, smart contracts and digital identities is creating significant opportunities for the global blockchain technology market.
The bottom line? Blockchain and crypto can no longer be ignored. Banks are calling on regulators to tackle the new crypto-markets such as ICOs quickly. “We can’t deny that things are changing,” says Benoit Legrand, chief innovation officer at Dutch bank ING. “The world will include cryptocurrencies in the way we work in the next ten years.”
Conclusions
Three months after the event, the big Davos predictions about AI and blockchain have not only been realized but have also become exponential We still don’t know how the future will look, what will work or how it will work. But there is no doubt that everyone is rushing to get ready for this evolution. Companies, sectors and countries are running a critical race to invest and discover the best technologies to leverage AI and blockchain. The time is now.
What were your predictions? Have they become reality? Comment below with your perspective or connect with me here
About the Author: Giulia Zanzi is passionate about combining IoT and mobile technologies with science to improve people’s lives. As Head of Marketing Fertility in Swiss Precision Diagnostics, a Procter & Gamble JV, she led the launch of the first Connected Ovulation Test System that helps women to get pregnant faster by detecting two hormones and syncing with their phone. A former member of the European Youth Parliament, Giulia is currently serving on the Advisory Council of the World Economic Forum Global Shapers and she is a Lean In Partner Champion. Giulia graduated with honors at Bocconi University in Milan and holds a Masters at Fudan University in Shanghai.
A Historic Phase Change in the Way We Build Things
Pavilion of a Chinese construction company at the 2015 World Expo in Milan / Photo by author
To understand the sea change currently happening in the world of manufacturing, it is important to look at the historical perspective. We can split history into the pre-industrial epoch, the time after the Industrial Revolution, and a new era, that we are currently entering.
Until the 19th century, the production of goods was a manual process. Even though craftsmen sometimes had simple machines at their disposal, each item was built by hand and became one individual, often unique, object.
This changed during the Industrial Revolution, which caused a dramatic shift to the manufacturing of large quantities of identical items. Many objects became standardized and the focus moved to the assembly of objects from as many off-the-shelf parts as possible, while trying to minimize the number of custom components and manual work. Engineers constantly strived to reduce complexity to bring down cost.
With the advent of Additive Manufacturing on an industrial level, we are adopting a new paradigm, where complex, highly customized objects are becoming the norm. A printer aggregates small pieces of matter according to a blueprint, indifferent to the simplicity or complexity of the instructions. The resulting object can be almost arbitrarily sophisticated, with little impact on cost and manufacturing time.
3D printers were first used for applications like rapid prototyping, where fast turnaround times allowed designers to work iteratively. With the introduction of better materials and the increased sophistication of the output, printers started to be used in highly individualized end-product manufacturing, such as prosthetics.
This shift to Additive Manufacturing of end-use-parts is starting to give designers and engineers newfound freedom, to design objects that cannot be produced through traditional manufacturing. In these applications, the additive aspect of the printers is the key element to the production of completely enclosed parts or objects that use complex internal substructures to reduce weight or that contain functional elements. This transition is going to speed up in the coming years as printers will start to include multiple diverse materials and are able to incorporate the placing of electronics, sensors and actuators into the printed product. The results will be highly sophisticated objects with little or no assembly required.
With this phase change happening, the focus now shifts to the software side, which is the key element to enabling objects of significantly higher complexity.
— — — — — —
Lin Kayser is the CEO of Munich-based Hyperganic, where he and his team are reinventing how we design and engineer objects in an age of digital manufacturing and synthetic biology.
— — — — — —
This article was originally published on LinkedIn on April 22, 2018
Five tech trends that shaped 2017
“The historian is a prophet looking backwards.” ― Friedrich Schlegel
This post was originally published on VC Cafe. As we approach the last stretch of 2017, I wanted to take stock of the tech trends that shaped our year. In the next post, I’ll cover my predictions for 2018.
1. Decentralisation
Perhaps the most impactful trend this year is the proliferation of Blockchain technologies and cryptocurrencies into the mainstream.
On the Blockchain front we’ve seen a wide array of potential applications from real estate to art dealing and diamond trade.
On Crypto, we moved from Voice Over IP to Money Over IP, and saw Bitcoin cross the $10,000 line. ICOs (initial coin offerings) became a ‘thing’ — tokenise everything, with people spending over $1M to buy virtual cats with Ethereum on CryptoKitties (it’s been acquired since).
For a second it looked like white papers were replacing the fundraising deck, with startups that struggled to raise traditional funding completing multi-million-dollar ICOs seemingly overnight. Unfortunately, a large percentage of ICOs feel like potential scams: money gets taken off the table quickly and with almost no supervision, and the only collateral at risk is reputation (in some cases, not even that).
This is just the beginning in my opinion, but regulation is likely to step in here very soon.
2. AI is the new UI
The hype around AI reached new heights in 2017. Using a decision tree to apply a set of rules, or operating a chatbot, doesn’t necessarily qualify as using AI, but it is almost impossible to find one of today’s tech startups without some form of machine learning, deep learning, NLP, etc.
As a field, AI made major breakthroughs this year, namely DeepMind’s AlphaGo decisive victory over the Go world champion, and then the improved version AlphaGo Zero, which was self taught and even better.
There’s no doubt that AI will continue to penetrate entire industries, in particular Automotive (self driving vehicles), robotics, drones, healthcare and marketing tech — from advertising to customer service.
Another aspect of the rise of AI is the infrastructure side: new chips from Nvidia, Google and Graphcore to fuel our growing need for fast data processing.
The Artificial Intelligence Index 2017, a Stanford report by AI Index (pdf) has some fantastic nuggets on the number of AI academic papers published, the number of enrolled students into AI courses, the growth rate of AI startups etc.
3. Data is the new oil
There’s one big problem with the perception of data being the new oil, the CEO of a successful AI startup told me. Large corporates are sure they are sitting on an oil field, and so spend millions to pour their data into expensive data lakes, only to find that it’s hard to refine that crude oil (took the analogy all the way, I guess). Organisations are simply ‘sitting’ on their data, or paying for unproven, expensive solutions. We are producing more data than ever in human history, and are getting better at understanding the patterns and the meaning of that data, but there’s still a lot of friction in getting that data and using it wisely.
For example, researchers can now predict the face of a person based on a tiny sample of DNA. We are able to predict which customers will churn or upgrade simply by watching a small sample of their behaviour, and soon, we should be able to predict where and when a crime is about to happen by applying models to surveillance data and past crime statistics.
Where does the line cross? Ethical considerations are becoming a major part of big data and machine learning startups, with several companies and industry bodies formed to tackle these questions.
4. Cyber is here to stay
Almost no weeks go by without the headline of a major hack. It seems like the Cyber security industry will only get bigger with more and more devices getting online, from our cars to our appliances.
We saw the rise of ‘Dark Marketing’, where advertisers are able to target individuals based on increasingly granular attributes (including race, religion, beliefs) and as Prof Scott Galloway said, “weaponise Facebook” as a platform to change public opinion.
Israeli startups attracted about 20% of the global funding for the security sector and saw the IPO of ForeScout, which reached an $897M market cap.
5. GAFAM
5 companies now dominate tech (Google, Apple, Facebook, Amazon and Microsoft), or 7 if you add Alibaba and Tencent. Their power in the market is almost absolute: for example, 99% of digital advertising growth is going to Facebook and Google. Just look at the size of Amazon compared to ALL OF RETAIL.
Their power is creating a public backlash — calling for tighter regulation on these companies dealings with privacy, data transparency and competition scrutiny.
It’s also getting increasingly hard to find a niche to compete with these giants, as they expand into every major area (cloud, messaging, hardware, enterprise, etc.), adopting an AI-first strategy. As an example, take a look at everything that Amazon announced at AWS re:Invent 2017.
In my next posts I will cover additional trends that dominated 2017, including Fake News, the Seed Slump, digital health, etc as well as some predictions for 2018. In the meanwhile, take a moment to sign up to my newsletter.
Vectorized implementation of back-propagation
In a previous post, we explained the basic principles behind back-propagation and how neural networks work. In this post, we will explain how to leverage optimised math libraries to speed up the learning process.
What is vectorization and why does it matter?
“Vectorization” (simplified) is the process of rewriting a loop so that instead of processing a single element of an array N times, it processes several or all elements of the array simultaneously.
Let’s start with an example of a dataset of 1000 houses sold in a specific city. For each house we have 5 pieces of information: its area, the number of rooms, the construction year, the price paid and the agency fees. The goal is to train a model that predicts the price and the agency fees from the first 3 features.
Dataset example
Let’s consider a simple linear feed-forward model with 6 weights (W11,W12,W13,W21,W22,W23) where:
Price = W11.Area + W12.NbRooms + W13.Year
Fees = W21.Area + W22.NbRooms + W23.Year
As explained in more detail previously, the goal of machine learning is to find the values for these 6 weights that make the model’s output fit the dataset’s real output as closely as possible. We start by initialising the weights randomly. Then, we forward-propagate to calculate the predicted price and agency fees. By comparing the results with the real price and fees from the dataset, we can get a gradient of the error to back-propagate later and update the weights accordingly.
A simple implementation for this would look something like this:
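A minimal sketch of such a per-example loop (illustrative only: plain Python with made-up variable names and learning rate, since the original snippet is not reproduced here):

```python
# A naive, per-example training loop: one forward pass and one gradient
# step per house record. Variable names and the learning rate are made up.

def train_epoch(weights, dataset, lr=1e-9):
    for inputs, targets in dataset:            # 1000 sequential iterations
        # Forward pass: one prediction per output row (price, fees).
        preds = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
        errors = [p - t for p, t in zip(preds, targets)]
        # Gradient step: the gradient of the squared error with respect to
        # a weight is proportional to (error * input).
        for i, err in enumerate(errors):
            for j, x in enumerate(inputs):
                weights[i][j] -= lr * err * x
    return weights

weights = [[0.0] * 3, [0.0] * 3]               # 2 outputs x 3 features
dataset = [([100.0, 3.0, 1990.0], [200000.0, 5000.0])]
weights = train_epoch(weights, dataset)
```

Every record triggers its own forward pass and weight update, one after the other.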
This sequential for loop over the dataset is, however, too slow, and does not take advantage of the parallelism of modern CPUs and GPUs.
Vectorizing forward-propagation
In order to achieve high performance, we need to transform the dataset into a matrix representation. If we take the column-based representation, every input from our dataset is copied to a column in the matrix.
Our weight matrix will be a matrix of 2 rows x 3 columns.
Our input matrix will be a matrix of 3 rows x 1000 columns.
Our output matrix will be a matrix of 2 rows x 1000 columns.
Our linear model that we are searching to solve, can then be presented in the following matrix-based form:
Matrix or a vectorized-form of: Weights x Inputs = Outputs
The reason this representation works is that this is exactly how matrix multiplication operates:
Matrix multiplication
Matrix-multiplication: a row i of the first matrix is multiplied by a column j of the second matrix to calculate the value of the cell (i , j) of the output
With the vectorized implementation, the previous for loop with 1000 iterations can now be done with very few, high-performance vectorized operations as follows:

Predictions = Matrix.Multiply(Weights, Inputs)

Error = Matrix.Substract(Predictions, Outputs)
On big datasets, and using GPUs (some have 1000 cores), we can expect speedups of thousands of times! On CPUs, many advanced math libraries, such as OpenBLAS, implement high-performance matrix operations.
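As a concrete illustration, here is the same forward pass in NumPy (a sketch with randomly generated stand-in data; the shapes follow the house-price example above):

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.standard_normal((2, 3))     # W: 2 outputs x 3 features
inputs = rng.standard_normal((3, 1000))   # one column per house
outputs = rng.standard_normal((2, 1000))  # real prices and fees

# The entire 1000-example dataset is processed in two vectorized operations.
predictions = weights @ inputs
error = predictions - outputs
```

Under the hood, `@` dispatches to an optimised BLAS routine, which is where the speedup comes from.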
Vectorizing back-propagation
Vectorizing forward-propagation is easy and straightforward, it follows the model definition. The challenge is in vectorizing the back-propagation of errors.
With numbers, if we pass a number x through a function f to get y=f(x), the derivative f’ of f gives us the rate of change of y when x changes.
With matrices, we need to use the Jacobian, which is a matrix made of the partial derivatives with respect to the different elements of the input matrix.
The rationale behind this is to fix all the elements in the input matrix except one, add a small delta 𝛿 to it, and see which elements in the output matrix are affected, and at what rate, then add these contributions together. By doing this for all elements of the input matrix, we get its gradient matrix at the input, which therefore has the same shape (number of rows and columns).
Consider the following matrix operation. RxP=S, (And the first 3 equations that calculates the first 3 elements of the output S)
Equation 1: s11 = r11.p11 + r12.p21 + r13.p31 (red output)
Equation 2: s12 = r11.p12 + r12.p22 + r13.p32 (green output)
Equation 3: s21 = r21.p11 + r22.p21 + r23.p31 (yellow output)
Let’s say we already have the gradient matrix ΔS at the output S, and we want to back-propagate it to the input R (respectively P) to calculate ΔR (resp. ΔP). Since r11 is only involved in the calculation of s11 and s12 (red and green but not yellow), we can expect that only 𝛿s11 and 𝛿s12 back-propagate to 𝛿r11.
In order to find the rate of back-propagation of 𝛿s11, we partially differentiate equation 1 with respect to r11 (treating everything else as constant), and we get the rate p11. (Another way to explain this: a small change in r11 will be amplified by a factor of p11 in s11.)
By doing the same for 𝛿s12 and equation 2, we get the rate p12.
If we try to back-propagate 𝛿s21 to r11, and differentiate equation 3 with respect to r11, we get 0, since equation 3 does not depend at all on r11. Another way to see this: if we have an error on s21, there is nothing that can be done on r11 to reduce this error, since r11 is not involved in the calculation of s21! The same applies for the remaining elements of the S matrix (s22, s31, s32).
Finally, by adding up, we get 𝛿r11=𝛿s11.p11 + 𝛿s12.p12
By doing the same for all the elements of matrix R, we get the following:
𝛿r11=𝛿s11.p11 + 𝛿s12.p12
𝛿r12=𝛿s11.p21 + 𝛿s12.p22
𝛿r13=𝛿s11.p31 + 𝛿s12.p32
𝛿r21=𝛿s21.p11 + 𝛿s22.p12
𝛿r22=𝛿s21.p21 + 𝛿s22.p22
𝛿r23=𝛿s21.p31 + 𝛿s22.p32
𝛿r31=𝛿s31.p11 + 𝛿s32.p12
𝛿r32=𝛿s31.p21 + 𝛿s32.p22
𝛿r33=𝛿s31.p31 + 𝛿s32.p32
If we look closely at the pattern, we see we can put it in a vectorized matrix-multiplication form as follows:

ΔR = ΔS x Transpose(P)
Similarly, if we follow the same procedure to back-propagate to P, we get the following equation:

ΔP = Transpose(R) x ΔS
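Both back-propagation rules, ΔR = ΔS x Transpose(P) and ΔP = Transpose(R) x ΔS, can be checked numerically with finite differences (a NumPy sketch; the 3x3 and 3x2 shapes follow the example above):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 2))

# Use the scalar loss L = sum(S), so the upstream gradient dS is all ones.
dS = np.ones((3, 2))
dR = dS @ P.T   # back-propagated gradient at R
dP = R.T @ dS   # back-propagated gradient at P

# Finite-difference check on one element of R: nudge r11 by a small eps
# and measure how much the loss changes.
eps = 1e-6
R_nudged = R.copy()
R_nudged[0, 0] += eps
numeric = ((R_nudged @ P).sum() - (R @ P).sum()) / eps
```

The numeric estimate should match dR[0, 0] up to floating-point noise, confirming the vectorized rule.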
Vectorizing everything
Each neural network layer is made of several mathematical operations. If we manage to define each mathematical operation in terms of matrix operations in both the forward and backward passes, we get maximum speedup in learning.
In a first step, each Matrix M has to be augmented by a companion Matrix ΔM to hold its gradient on the way back.
In a second step, each Matrix operation, has to define its own forward and backward operations. For instance:
After creating this vectorized library of mathematical operations, we can use it to chain operations and create layers, activation functions, loss functions and optimizers. The back-propagation will be defined automatically as a callback stack (or computation graph) of the mathematical functions used in each layer. This is how TensorFlow works, to a certain extent.
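As a toy illustration of this idea (a sketch in NumPy, not how TensorFlow is actually implemented), each operation can cache its inputs on the forward pass and expose a matching backward rule:

```python
import numpy as np

class MatMul:
    """One vectorized operation with paired forward and backward passes."""

    def forward(self, r, p):
        self.r, self.p = r, p          # cache inputs for the backward pass
        return r @ p

    def backward(self, d_out):
        # Back-propagate the output gradient to both inputs,
        # using dR = dS @ P.T and dP = R.T @ dS.
        return d_out @ self.p.T, self.r.T @ d_out

op = MatMul()
s = op.forward(np.ones((3, 3)), np.ones((3, 2)))
d_r, d_p = op.backward(np.ones_like(s))
```

Chaining such objects and calling their backward methods in reverse order is what the callback stack amounts to.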
We can see the full machine learning process as a stack of abstractions:
An abstraction of machine learning library stack
Today’s successful businesses aren’t just fast and efficient. They’re becoming truly smart thanks to a new breed of technology called “cognitive services.”
The term “cognitive services” describes machine learning, artificial intelligence, and distributed algorithms that make it easy to integrate vision, speech, language, knowledge, problem-solving, analysis, categorization, moderation, and more into apps and businesses.
Cognitive services enable applications to evolve and adapt rather than simply following prewritten rules. They augment and expand human capabilities, allowing us to do our jobs faster, more efficiently, and more sustainably.
Cognitive computing doesn’t aim to replace the human element but to extend human capabilities. Humans can think deeply and use reason to solve complex problems, but we lack the ability to analyze and process massive amounts of data. That’s where computers excel. The cognitive computing era makes the most of both strengths: the human’s and the machine’s.
As cognitive systems solve complex problems, they improve their efficiency and accuracy by building and acting upon sophisticated pattern-recognition models. These systems aren’t explicitly programmed to work in a fully prescribed way but to naturally interact with human data inputs, then learn and grow based on the data they accumulate.
Big players like Watson, AWS, and Microsoft, as well as fast-moving startups like SiftNinja and Clarifai, have released a massive number of cognitive services, delivering them through APIs that make them a snap to fold into new applications.
Examples of Cognitive Services
Translation: Enable two users to chat in their own — different! — languages by translating their messages in real-time.
Natural language processing: Analyze massive amounts of data inputs and gauge the sentiment of the messages.
Chatbots: Create an intelligent bot that parses natural language from a human and responds as accurately as another human could.
Facial recognition: Detect human faces and organize them into groups based on predetermined categories.
Machine learning: Intelligently sense, process, and act on information delivered by sensors to control devices in response to environmental factors like temperature, rain, or earthquakes.
The Impact of Cognitive Services
Cognitive services are used to create new types of customer engagement, build smarter products, improve internal operations, and make smarter decisions. Cognitive services have already made a significant impact on three areas of business.
Discovery
With the vast amounts of data and information they have at their disposal, applications can use cognitive services to find patterns, insights, and connections that the hardest-working human might never identify. And having found patterns once, they can create new and unanticipated ways to adapt and grow, making discovery a more accurate and efficient proposition.
Engagement
Cognitive services empower businesses to see, hear, speak, understand, and interpret natural language and information sets, enabling them to create new, engaging experiences for users, customers and themselves. By understanding and responding to the ways users interact with apps and each other, cognitive systems are changing the way humans and systems interact.
Decision
The most challenging but potentially revolutionary impact of cognitive services is on the decision-making process. Intelligent systems can rapidly weigh evidence and analyze information, then make a decision based on data, not hunches. They can consider and act on complex sets of information — something as simple as recommending a product on an e-commerce site or as complex as optimizing smart devices in an industrial setting.
The Rise of Cognitive Business
Organizations that deploy cognitive services will work more efficiently, safely, and sustainably, and deliver more engaging and immersive experiences to their customers. From the way we buy goods to the way our children learn to the food that we eat, they will drive the innovation of industries and organizations into the future.
The question is: how can you get started implementing dynamic cognitive services in your business today? To start, look at a few things:
What are the biggest inefficiencies affecting your workflow today?
What are your most significant customer complaints?
What processes represent the biggest bottlenecks?
With answers in hand, you’ll be ready to explore the vast range of cognitive services at your fingertips and discover how they can transform your business.
Intelligence at the Edge: Event-Driven Architecture
Event-driven architecture provides an efficient way to carry out cognitive tasks. It applies basic business logic while data is in motion and can decide whether to involve back-end processes.
Cognitive services are quickly changing applications and the businesses that deploy them. Using APIs from companies like IBM, AWS, and Microsoft, developers can leverage some of the world’s most sophisticated technology for computer vision, translation, sentiment analysis, and much more with just a few lines of code.
To get the most out of cognitive services, many developers are adopting a design pattern called event-driven architecture.
As the name suggests, event-driven architecture makes software change its behavior in response to events in real time. Event-driven architecture is different from traditional request-response architectures such as REST in that an event-driven system broadcasts a notification when a predefined event occurs, rather than following a set path of subsequent subroutines.
This notification may be picked up by any number of other systems, whose use of the information is decoupled from the original event. It’s a way to create faster, more dynamic, more distributed, and independent applications, allowing you to trigger and execute business logic at the edge, with each system informed by, but not necessarily reliant upon, the next.
Image Source: Moving the Cloud to the Edge
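In miniature, the pattern is a publish/subscribe dispatcher: publishers broadcast named events, and any number of decoupled handlers react. A minimal Python sketch (the event names and payloads are hypothetical):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process event bus: publishers and subscribers are decoupled."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        # Any number of handlers may react to the same event.
        self.handlers[event_name].append(handler)

    def publish(self, event_name, payload):
        # The publisher does not know (or care) who is listening.
        for handler in self.handlers[event_name]:
            handler(payload)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda p: audit_log.append(p["id"]))
bus.publish("order.created", {"id": 42})
```

Adding a new consumer is just another `subscribe` call; nothing about the publisher changes.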
What Is the Connection With Cognitive Services?
The event-driven design pattern provides a fast and efficient way to carry out cognitive tasks. Instead of sending all your data to an external server, having that server parse the data, and figuring out what action to take, it applies basic business logic while the data is in motion, directly in your network, and can decide whether to involve back-end processes. This way, you aren’t wasting valuable bandwidth or computational power sending data that never needed to travel back to home base for processing.
As a result, it becomes possible to build powerful cognitive applications right where the intelligence is applied: at the edge of the network.
Because cognitive services are delivered as discrete components, you can add them via serverless microservices and process data in real-time without the need for ingestion by a centralized data center unless it is truly necessary.
A Case Study: Yummy Cola
Let’s take one example. Say a beverage company called Yummy Cola is launching a new line of flavored colas leading up to, and during, this year’s Super Bowl. It wants to monitor brand reaction through social media channels but knows the #superbowl hashtag will be incredibly busy with game analysis and the activity of other brands. It needs a way to filter its brand mentions and gauge how users feel about the product launch.
To do this at scale would cost a fortune, and without an event-driven architecture, sending every user’s message to a central server or data center to process and analyze would be incredibly slow. An event-driven system will be much more efficient, using cognitive services to carry out basic business logic at the edge.
In this way, the brand can monitor each message, determine whether it refers to the new colas, parse the sentiment of the relevant ones, and only pass the relevant information to the back end.
To do this, it could deploy edge computing resources to filter the messages with a natural language processing service, identifying which messages mentioned the brand and which were unrelated. From there, it could use a different cognitive service to analyze people’s feelings about the different colas. It could even publish the popularity of the different products. And it could do all this without bringing the back-end servers into play.
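The edge-side logic described above can be sketched as a two-stage filter, relevance first and sentiment second, with a naive word count standing in for a real cognitive sentiment service (the keywords and messages are made up):

```python
BRAND_TERMS = {"yummy", "yummycola"}                 # hypothetical brand keywords
POSITIVE, NEGATIVE = {"love", "great"}, {"gross", "awful"}

def mentions_brand(message):
    return any(term in message.lower() for term in BRAND_TERMS)

def naive_sentiment(message):
    # Stand-in for a cognitive sentiment API: +1/-1 per matched word.
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def process_at_edge(messages):
    # Only brand-relevant messages (with their sentiment) travel to the
    # back end; everything else is dropped at the edge.
    return [(m, naive_sentiment(m)) for m in messages if mentions_brand(m)]

stream = ["What a game! #superbowl",
          "I love the new Yummy cola #superbowl"]
relevant = process_at_edge(stream)
```

Only the second message would ever reach the back-end servers.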
Architecture for the Edge — and Beyond
RESTful architectures were well-suited to an earlier, simpler generation of web applications. Modern applications demand a different approach, with their dense mesh of microservices, edge-computing nodes, and streams of data from sensors and devices.
What applications need most now is an architecture that is light, flexible, and decentralized. Event-driven architecture satisfies on all counts — an elegant example of form following function.
Want in-depth analysis of how cognitive services are changing everything? Check out our full eBook: A World Transformed: Building Smarter, Next Generation Apps with Cognitive Services. In it, we cover:
What are cognitive services?
How cognitive services are transforming business
Cognitive services and edge computing
Use cases of today and tomorrow
Originally published at dzone.com.
Since the launch of the Watson Visual Recognition API, we’ve seen users help California save water, perform infrastructure inspections with drones, and even find Pokemon. Powering many of these use cases are custom classifiers, a feature within Visual Recognition that allows users to train Watson on almost any visual content.
To create custom classifiers, users define categories they want to identify and upload example images for those categories. For example, a user wishing to identify different dog breeds may create 4 classes (golden retrievers, huskies, dalmatians, and beagles) and upload training images for each class. You can find this exact example in the Watson Visual Recognition demo or explore other tutorials on custom classifiers.
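For reference, the Visual Recognition v3 REST API accepted training data as one ZIP of example images per class, uploaded in a multipart POST whose field names follow a `{class}_positive_examples` convention. The sketch below only assembles those form fields; the file paths are illustrative, and the endpoint and exact parameters should be checked against the current API reference:

```python
def build_training_form(classifier_name, class_zips, negative_zip=None):
    """Assemble multipart form fields for a custom-classifier training call.

    class_zips maps a class name (e.g. 'beagle') to the path of a ZIP of
    example images; field naming follows the '<class>_positive_examples'
    convention of the Visual Recognition v3 API.
    """
    fields = {"name": classifier_name}
    for class_name, zip_path in class_zips.items():
        fields[f"{class_name}_positive_examples"] = zip_path
    if negative_zip:
        fields["negative_examples"] = negative_zip
    return fields

form = build_training_form(
    "dogs",
    {"beagle": "beagle.zip", "husky": "husky.zip"},
)
```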
Custom classifiers can be highly powerful but require careful training and content considerations to be properly optimized. Through our user conversations, we’ve assembled a best practices guide below to help you get the most out of your custom classifiers.
How training can increase Watson Visual Recognition’s quality
The accuracy you will see from your custom classifier depends directly on the quality of the training you perform. Clients in the past who closely controlled their training processes have observed greater than 98% accuracy for their use cases. Accuracy — different from confidence score — is based on a ground truth for a particular classification problem and particular data set.
“Clients who closely control their image training processes observed greater than 98% accuracy.”
As a best practice, clients often create a ground truth to benchmark against human classification. Note that humans often make classification mistakes due to fatigue, repetition, carelessness, or other problems of the human condition.
On a basic level, images in training and testing sets should resemble each other. Significant visual differences between training and testing groups will result in poor performance results.
There are a number of additional factors that will impact the quality of your training beyond the resolution of your images. Lighting, angle, focus, color, shape, distance from subject, and presence of other objects in the image will all impact your training. Please note that Watson takes a holistic approach when being trained on each image. While it will evaluate all of the elements listed above, it cannot be tasked to exclusively consider a specific element.
The API will accept as few as 10 images per class, but we strongly recommend using significantly more images to improve the performance and accuracy of your classifier. 100+ images per class is usually a good starting point for more robust accuracy.
What is the score that I see for each tag?
Each returned tag will include a confidence score between 0 and 1. This number does not represent a percentage of accuracy, but instead indicates Watson’s confidence in the returned classification based on the training data for that classifier. The API will classify for all classes in the classifier, but you can adjust the threshold to only return results above a certain confidence score.
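As an illustration, filtering returned classes by a confidence threshold can also be done client-side. The response shape below is a simplified stand-in for demonstration purposes, not the exact Watson Visual Recognition JSON format:

```python
# Toy sketch: keep only the classes whose confidence score meets a
# chosen threshold. The list of dicts below simulates a classifier
# response; it is not the real Watson API payload.

def filter_by_threshold(classes, threshold=0.5):
    """Return only the classes whose score is at or above the threshold."""
    return [c for c in classes if c["score"] >= threshold]

response = [
    {"class": "golden retriever", "score": 0.92},
    {"class": "husky", "score": 0.41},
    {"class": "beagle", "score": 0.07},
]

print(filter_by_threshold(response, threshold=0.5))
# Only "golden retriever" clears a 0.5 threshold here.
```

Choosing the threshold is exactly the cost/benefit decision described above: a higher threshold trades recall for precision.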
The custom classifier scores can be compared to one another to compare likelihoods, but they should be viewed as something that is compared to the cost/benefit of being right or wrong, and then a threshold for action needs to be chosen. Be aware that the nature of these numbers may change as we make changes to our system, and we will communicate these changes as they occur.
Further details about scores can be found here.
Examples of difficult use cases
While Watson Visual Recognition is highly flexible, there are a number of recurring use cases where we’ve seen the API either struggle or require significant pre/post-work from the user.
Face Recognition: Visual Recognition is capable of face detection (detecting the presence of faces), not face recognition (identifying individuals).
Detecting details: Occasionally, users want to classify an image based on a small section of an image or details scattered within an image. Because Watson analyzes the entire image when training, it may struggle on classifications that depend on small details. Some users have adopted the strategy of breaking the image into pieces or zooming into relevant parts of an image. See this guide for image pre-processing techniques.
Emotion: Emotion classification (whether facial emotion or contextual emotion) is not a feature currently supported by Visual Recognition. Some users have attempted to do this through custom classifiers, but this is an edge case and we cannot estimate the accuracy of this type of training.
Examples of good and bad training images
GOOD: The following images were utilized for training and testing by our partner OmniEarth. This demonstrates good training since images in training and testing sets should resemble each other in regards to angle, lighting, distance, size of subject, etc. See the case study OmniEarth: Combating drought with IBM Watson cognitive capabilities for more details.
Training images:
Testing image:
BAD: The following images demonstrate bad training since the training image shows a close-up shot of a single apple while the testing image shows a large group of apples taken from a distance with other visual items introduced (baskets, sign, etc.). It’s entirely possible that Watson may fail to classify the test image as ‘apples,’ especially if another class in the classifier contains training images of a large group of round objects (such as peaches, oranges, etc.).
Training image:
Testing image:
BAD: The following images demonstrate bad training since the training image shows a close-up shot of a single sofa in a well-lit, studio-like setting while the testing image shows a sofa that is partially cut off, farther away, and situated among many other objects in a real world setting. Watson may not be able to properly classify the test image due to the number of other objects cluttering the scene.
Training image:
Testing image:
Need help or have questions?
We’re excited to see what you build with Watson Visual Recognition, and we’re happy to help you along the way. Try the custom classifiers feature, share any questions or comments you have on our developerWorks forums, and start building with Watson for free today.
Originally published at www.ibm.com on October 24, 2016.
| Best Practices for Custom Models in Watson Visual Recognition | 33 | best-practices-for-custom-classifiers-in-watson-visual-recognition-1015a273f75d | 2018-04-27 | 2018-04-27 20:40:43 | https://medium.com/s/story/best-practices-for-custom-classifiers-in-watson-visual-recognition-1015a273f75d | false | 1,030 | AI Platform for the Enterprise | null | ibmwatson | null | IBM Watson | ibm-watson | IBM WATSON,ARTIFICIAL INTELLIGENCE,CLOUD SERVICES,MACHINE LEARNING,DEEP LEARNING | ibmwatson | Machine Learning | machine-learning | Machine Learning | 51,320 | Kevin Gong | Product manager @IBMWatson. Photographer. UX/UI designer. DIYer. Data tinkerer. Social good supporter. Formerly @McKinsey, @TEDx, @Cal, @ColumbiaSIPA | 8022025e9700 | kmgong | 533 | 568 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-12-13 | 2017-12-13 05:20:07 | 2017-12-14 | 2017-12-14 23:27:01 | 4 | false | en | 2018-09-01 | 2018-09-01 06:08:33 | 9 | 1015d9e326ae | 4.560377 | 2 | 0 | 0 | Magnus Carlsen is the world’s highest-rated chess player, and he doesn’t spend all day playing chess. What role does that play in his… | 5 | To perform like Magnus… relax like Magnus?
Magnus Carlsen is the world’s highest-rated chess player, and he doesn’t spend all day playing chess. What role does that play in his success?
Magnus Carlsen, 27, is the highest-rated (human) chess player of all time and has been world chess champion since 2013. His first championship victory is captured in the very enjoyable Magnus, now available on Netflix everywhere.
Magnus, as one might expect, puts in hours of deliberate practice in trying to become the best at chess. He is quizzed by his head coach about historical positions; he analyzes his own games; he reads chess news to understand developments in other players.
But something else stands out about Magnus in the documentary: He spends a good amount of time doing things that aren’t quite chess-related.
Soon-to-be-World Champion Magnus Carlsen at his training camp in South Norway. Magnus asks “Why do I do that?” about a missed volleyball hit — but he could also be asking a more meta question.
Repeatedly in the film, Magnus is shown playing volleyball, ping pong, and swimming, even in the run-up to critical matches. At times when many others would be inclined to hunker down and cram, Magnus seems to find continued ‘distractions’ in other domains.
Now, I will not pretend to have a comprehensive accounting of Magnus’s practice hours; perhaps this is only a trivial subsection of Magnus’s week. Additionally, Magnus is certainly not a slacker; he is shown studying chess when surrounded by family, for instance, when it would be easy to study just a bit less.
But Magnus’s overall preparation style — and the varied activities — does not escape notice of his head coach, particularly compared with the documentary’s portrayal of then-World Champion Vishy Anand: “You know, [Magnus] may not always be the most serious guy in training. But in his head, he has stuff going on. It’s a different kind of approach than maybe other kinds of schools.”
Magnus plays ping pong with his head coach in lead-up to the chess world championship.
Magnus without question puts in legwork for chess, but he also finds room for other activities. Without getting overly-rigorous (and at risk of leaning too pop-psychological), I think there are a few questions to consider for one’s own performance after observing how Magnus spends portions of his preparation time:
Would your performance benefit from time for more-complex realizations to form? As Magnus’s coach notes, his time playing ping pong is not dead time from training, but rather time to gradually work over things without full-force thinking. Plenty of activities involve insight problems that aren’t necessarily best solved by thinking harder or longer; many innovations arise from taking a concept in one domain and applying it to another. For Magnus, sports provide this ‘idle work’ time, just as many people find that taking walks encourages their best thinking. As an additional upside, sports help to keep the body active and healthy.
Would your performance benefit from time to mentally recharge? Though this is not explicitly discussed as a factor in the documentary, I suspect a large reason for Magnus’s breaks is that studying chess is exhausting. On a biological level, Stanford neuroscientist Robert Sapolsky contends that grand masters can burn thousands of calories per day in the course of a chess tournament (though it need not be several thousands to be significant). Even without the calorie consideration, many workers are only capable of peak productivity for stints of ~3 hours — though this isn’t ironclad, and someone like Magnus very well may retain focus for longer. At some point, however, everyone will face diminishing returns — and when faced with diminishing returns, why not play ping pong? (Or sleep.)
This red-lined engine is at a balmy 37C, just like the human body. Coincidence? I think not.
Would your performance be more sustainable with allowances for ‘sub-optimal’ activities? This question is also not discussed in the documentary, but I do wonder if Magnus’s working in more time for sports and friendships will allow him to sustain peak performance for longer. On one hand, time spent in these ways might trade off with time spent training on chess and could in theory lead to worse short-run outcomes (though as discussed above, perhaps not). On the other hand, Magnus is a human being, and if these interests keep him happy, healthy, and motivated to keep pursuing his goals, his hobbies may well end up being instrumental to his success even if locally suboptimal. Put more simply, this is a question of avoiding burnout: If taking on certain stresses and time pressures will cause you to redline, perhaps they aren’t the right path for your long-term career goals.
I continue to be inspired by Magnus’s journey to the top of the chess world, as well as by the feats of winetasters in Somm and friends of mine from the national debate community. (Not to mention my friend Max, who recently traveled to Germany to play Magnus in a game of chess… Max lost but clearly hasn’t been playing enough ping pong.)
Wine: It’s serious business. (Screenshot from Somm)
There’s something fascinating about peak performance and the focus it inspires in people — and particularly when people seem to have found a balance between that focus and continuing to achieve their goals.
Of course, in the future, computers might not face these tradeoffs that can constrain human performance today. (The Stockfish and AlphaZero chess engines, for instance, don’t step away from chess to have dinner with their loved ones, at least as far as we know.)
It’s worth considering, then, how demands on human performance may evolve over time, particularly as computers expand deeper into the realm of human activities and things previously considered art.
At chess, we can say with confidence that Magnus aka “The Mozart of Chess” and the highest-rated human ever < Stockfish (the go-to chess engine for human preparation) < AlphaZero (a self-playing reinforcement learning agent that ran for less than a day [on incredibly high-end hardware]).
Unfortunately for Magnus’s computer-beating prospects, no amount of sleep or time training in other domains is likely to reverse that — but he might pose a model for us to increase our own productivity in the interim.
Steven Adler is a former strategy consultant focused across AI, technology, and ethics.
If you want to follow along with Steven’s projects and writings, make sure to follow this Medium account. Learn more on LinkedIn.
| To perform like Magnus… relax like Magnus? | 22 | to-perform-like-magnus-relax-like-magnus-1015d9e326ae | 2018-09-01 | 2018-09-01 06:08:33 | https://medium.com/s/story/to-perform-like-magnus-relax-like-magnus-1015d9e326ae | false | 1,023 | null | null | null | null | null | null | null | null | null | Chess | chess | Chess | 1,041 | Steven Adler | I work at the intersection of AI, ethics, and business strategy; thoughts are my own. www.linkedin.com/in/sjgadler | c04544a536f5 | sjgadler | 305 | 284 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-27 | 2018-02-27 13:29:52 | 2018-02-27 | 2018-02-27 21:05:24 | 5 | false | en | 2018-02-27 | 2018-02-27 21:05:24 | 0 | 1017fa6c1615 | 3.425786 | 1 | 0 | 0 | I encountered JupyterLab recently on one of my Twitter journeys by seeing this tweet: | 2 | Installing JupyterLab
I encountered JupyterLab recently on one of my Twitter journeys by seeing this tweet:
Why is this such an interesting feature? Well, typing . followed by TAB completion normally lists all methods and attributes that are associated with that object. If we were to perform a . followed by TAB on a common object like an integer, we would see a number of methods that can be called on that integer.
This is an extremely useful feature that is part of most IDEs (Interactive Development Environments).
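What the completer shows after a `.` is essentially the object's attribute list, which you can inspect yourself with Python's built-in `dir()`:

```python
# TAB completion after "." is roughly equivalent to listing an object's
# attributes, which Python exposes through the built-in dir().
# Filtering out dunder/private names mimics what the completer shows.
methods = [name for name in dir(42) if not name.startswith("_")]
print(methods)
# Includes integer methods such as 'bit_length' and 'to_bytes'.
```

Note that `dir(3+1)` also works fine at the REPL, because Python evaluates the expression before calling `dir()`; the completer's harder job, discussed next, is to infer the type without evaluating anything.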
This becomes more difficult when you have a situation where the object you are interested in is enclosed inside a set of brackets, for example: (3+1). What would happen if we attempt to perform the . TAB completion on this object?
Oops. This has now been autocompleted with .ipynb_checkpoints/, which is not a useful outcome at all. The reason why this occurs is that the . TAB completely ignores the object in brackets, and autocompletes as if it wasn’t there. (To be convinced of this, notice that this is exactly what happens if we . TAB in an empty cell)
If this feature were to work, then somehow the application would have to know that the result of (3+1) is an integer, and that it should treat the brackets in exactly the same way that it treats the integer. This has really bugged me in the past, but I could not think of a clear and robust way to frame the problem in order to contribute a solution.
I have been a long time user of Jupyter Notebooks for my work, and I regularly encourage others to use it who would like to work on Data Science projects. These recent updates to JupyterLab looked really cool.
Having already installed Jupyter Notebook within my Anaconda environment, I proceeded to install JupyterLab.
conda install -c conda-forge jupyterlab
This results in a successful installation without any pain at all. I launch the notebook with:
jupyter lab, and navigate tohttp://localhost/8888 in my browser.
So far, so good. Now to replicate this great Tab completion feature that attracted me to download this project in the first place:
Experimenting with TAB completion.
Huh? This is not the moment of satisfaction that I was expecting at all. Now, I have been working with data science software for two years now, and I am sure that new users of Jupyter and JupyterLab would probably just give up when something like this happens. This is why I want to document my entire thought process in how I debugged this issue.
What software versions am I using?
This is a very common way to identify the source of a problem. After navigating to the JupyterLab website, I opened up the Binder link.
This opens up a version of JupyterLab on a webserver in the cloud, where you can immediately start hacking. (In another post, I would like to explain why I think Binder is such an awesome service to people working in data science.)
After opening up a new notebook, and investigate the . TAB completion as before, we can see that it works!
So what is going on here?
Well, clearly there is something different between my installation of JupyterLab and the version on JupyterLab that is launched through Binder.
Another benefit of JupyterLab is that it has a terminal console that is available within the environment. Using this, I can find out that the version of JupyterLab on the Binder instance is 0.31.8 — the same as I have on my machine!
Hmm — so if it is not a problem with the version of JupyterLab, what could the problem be? Within the JupyterLab documentation, I start to get the impression that JupyterLab and Jupyter Notebook are actually two separate projects. Therefore, even if I have the state-of-the-art version of JupyterLab on my system, the notebook need not be up to date.
This turned out to be the cause of the issue. On the Binder version, jupyter notebook was version 5.4.0, and on my own computer the version was 5.0.0.
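One small gotcha when checking versions by hand: dotted version strings should be compared numerically, not as plain strings. A minimal helper for comparing the two notebook versions mentioned above:

```python
# Compare dotted version strings like the two Jupyter Notebook versions
# above (5.4.0 on Binder vs 5.0.0 locally) by converting them to tuples
# of integers, so the comparison is numeric rather than lexicographic.

def parse_version(v):
    """Turn '5.4.0' into (5, 4, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

binder = parse_version("5.4.0")
local = parse_version("5.0.0")
print(binder > local)  # True: the Binder notebook is newer.
```

Plain string comparison would incorrectly rank "5.10.0" below "5.9.0", which is why the tuple conversion matters.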
Sometimes, you just need to check your versions.
| Installing JupyterLab | 1 | installing-jupyterlab-1017fa6c1615 | 2018-06-13 | 2018-06-13 20:54:50 | https://medium.com/s/story/installing-jupyterlab-1017fa6c1615 | false | 687 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Will Jones | Former pres @UCLEntrepreneur | CS @Cambridge_Uni, Fellow @ASIDataScience | Mathematical Genomics @emblebi | open science, genomics, data security. | 76a7c2c51c91 | willgdjones | 113 | 120 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-03 | 2018-09-03 22:50:23 | 2018-09-03 | 2018-09-03 23:25:07 | 3 | true | en | 2018-09-03 | 2018-09-03 23:25:07 | 3 | 1018533fb246 | 4.078302 | 0 | 0 | 0 | The Painstakingly Slow Beginning of it All | 5 | Making the Switch
The Painstakingly Slow Beginning of it All
Nothing punctuates the passage of time quite like a birthday in your mid-30s. You wake up one day and the realization sinks in that not only are you not Zuckerberg rich, but you’re also still single, in student loan debt, your recent health screening makes the fact that you are an adult very real (thanks, alcohol), and you’re not quite living your truth just yet.
That being said, I’ve made a pivotal decision in my life to change course. That’s the “switch” I’m speaking to. I’ve accepted the fate that one day soon we’ll have robot overlords, and now I’m trying desperately to get ahead of that by learning about artificial intelligence and machine learning. (For anyone reading this who knows a lot about AI, I’m well aware that artificial general intelligence is “supposedly” not that close. C’mon, give me some credit, I’m just trying to inject some humor into our dark Skynet fate. Although, this Futurism article about AI “remembering” did just come out on Friday.)
To be fair I’m not starting from zero. I’m starting from somewhere maybe around 1.5… mayyyybe 2. I’ve been dabbling in Python for years, printing “hello world” and basic if/elif scripts like a champ. My for loops could definitely use some work.
Deep down inside, part of me wishes I was starting from zero, and then I’d feel better about the fact that I’ve essentially wasted 7 years of my life piddling around the perimeter. I mean, I’m pretty sure I’d be at PhD level and running Deep Mind by now if I hadn’t slacked so much on learning these skills.
Conventional wisdom and fortune cookies state that a journey of a million miles starts with a single step. This is an attempt to chronicle my steps.
Three Practical Steps to Success
Step 1: Google “how to master Python in 6 months”… four months ago and then do nothing until after your 33rd birthday.
6 out of 8 million links to best ways to learn Python well and relatively quickly. No need to drag it on.
Step 2: Make a decision that you’re going to learn it, no slacking this time, and you’re going to stop making bad decisions with your time and bite the bullet and do it for reals this time. DAMMIT. And then Google “How to become a data scientist?” realize that the original post you were working off of back in Step 1 is now obsolete on Quora, and you can’t find it anymore, and now you have to start from zero and just do some weird amalgamation of what you can find.
Step 3: Write a blog post about how you’re going to learn about Python and data science instead of actually learning about Python and data science. Check and double check. ✅✅
But seriously, going back to more reasons why I called this little social experiment the Switch.AI, because I’m both passionate and serious about changing my life and learning these skills that are not only high in demand, but forward looking and isn’t a complete departure from my current job, but it’s also not at all what people are expecting… Little do they know what’s in store. This is just a chronicle of how I’m going about doing it from mediocre coding skills and knowledge to something more concrete. I just so happen to enjoy writing so this is also something of a catharsis, because I want to share the journey of blood, sweat and mostly tears as I go.
The New Plan to Success (One step at a time)
The real step 1 is to make my way through A Byte of Python over the next week. I actually audited Charles Severance’s University of Michigan Programming for Everybody (Getting Started with Python) course some time ago… back when Coursera wasn’t a sellout charging people for auditing classes and what not, and I also tried Andrew Ng’s Machine Learning class before Coursera even existed. I never made it all the way through, even though I tried at least 2–3 times after that, but didn’t have the requisite programming skills or mindset or motivation to understand why I needed these skills yet. I’ve got a plan now, so watch out world.
Basic Python FTW!
Also, to be perfectly candid, I do have a big girl job, a boyfriend, and social obligations like going to see Kermit Ruffins play at the bar on Wednesday (#priorities).
Me and my friends dancing at Kermit Ruffins show.
However, when I make a decision and I really stick my mind to it, I can become quite obsessive. Like that time I finished all of Breaking Bad and Episode 1 to Season 7 of The Walking Dead in approximately a week each. Again… #priorities.
Also I’m going to stack my reading list in favor of AI books now, instead of murder books (sorry, book club). Here’s what’s on the list:
I, Robot — Isaac Asimov
Gödel, Escher, Bach: An Eternal Golden Braid — Douglas Hofstadter
How to Create a Mind — Ray Kurzweil
On Intelligence — Jeff Hawkins & Sandra Blakeslee
Superintelligence — Nick Bostrom
Incognito and The Brain — David Eagleman
And then I’m sure I’ll find some heavy data science and machine learning books along the way (re: the screenshot of Quora notes above on all the ways I can learn).
Wish me luck, and I’ll be back soon to let you know how A Byte of Python goes and maybe I’ll finish out 1 of the books I’ve already started on the list. Also, here’s some songs you might like that I put on today to comfort my scaredy dog in the rain, and while feels like the end of summer, I found them both comforting myself.
| Making the Switch | 0 | making-the-switch-1018533fb246 | 2018-09-04 | 2018-09-04 15:28:07 | https://medium.com/s/story/making-the-switch-1018533fb246 | false | 935 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | The Switch.AI | A journey from zero to hero (hopefully) in artificial intelligence, machine learning and data science. #tbd | b4dfb3d285ba | theswitch.ai | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-29 | 2018-04-29 02:15:51 | 2018-05-01 | 2018-05-01 23:13:34 | 1 | false | en | 2018-05-01 | 2018-05-01 23:13:34 | 2 | 10189b9b7250 | 2.909434 | 0 | 0 | 0 | Technically, artificial intelligence (AI) occurs whenever you use a computer to perform an “intelligent” task. The first people to do this… | 2 | How Can AI and Predictive Analytics Enhance Your Business Performance?
By National Security Agency [Public domain], via Wikimedia Commons
Technically, artificial intelligence (AI) occurs whenever you use a computer to perform an “intelligent” task. Among the first people to do this were Turing and his team, who used electromechanical machines known as Bombes to break the German Enigma codes during World War II.
This begs the question of what intelligence is from a computing point of view. We can think of intelligence as the ability to learn. Turing’s machines were able to “learn,” through millions of decoding attempts, what made sense as a code versus what didn’t.
As the use of AI expanded, the parallel field of machine learning developed with it. For artificial intelligence to exist, machines needed to learn.
How do machines learn? They are able to mine data at a level previously impossible to humans alone. Much of the research in this area was initiated by businesses wanting to make sense from the masses of data they collect — much of it unstructured and previously of little value to them.
Businesses believe that they can create an advantage over their competitors if they can gain value from the screeds of data they and their customers generate. They are always looking for the next big trend so they can be prepared before their competitors are aware of it.
The primary purpose of data mining is to predict consumer behavior. And ultimately, the main goal of AI is to develop sophisticated software to allow machines to predict behavior better.
AI and Machine Learning Can Lead to Better Predictive Analytics
While predictive analytics uses a wider range of techniques than just data mining, it does have much of the same intent as AI. Predictive analytics also aims to predict future behavior more accurately.
Gartner sees predictive analytics as having four key components:
1. An emphasis on prediction
2. Rapid analysis measured in hours or days (unlike data mining which can take much longer)
3. An emphasis on the business relevance of the results
4. Focus on ease of use
Using this definition, it can be seen that a key purpose of predictive analytics is to enhance business performance. AI is a vital tool to help firms predict what their customers genuinely desire.
Forward-thinking businesses use the myriads of data they collect to train predictive machine learning models. For example, Kairos, who make facial recognition software, initially fed thousands of images of different types of faces into their computers to teach the system what faces look like. Once the system could distinguish a face from other parts of an image, Kairos was able to begin training the system to recognize specific faces. Not only can their system predict the likelihood that an image features a face, but it can also predict with accuracy whether it contains a particular face. This has many real-life applications, particularly in areas such as security screening.
Similarly, businesses can use machine learning routines to identify wear patterns that may indicate that essential factory machinery is in danger of failing and needs repair. They use predictive analytics techniques on this data to establish signals that could help them create ideal maintenance schedules to ensure that the machines rarely, if ever, break down.
Large retailers such as Amazon collect vast volumes of data to track their customers’ preferences, search habits, and a general indication of likes and interests. They then apply predictive analytics to this data to create customized and personalized shopping suggestions.
WorkFusion works with global retailers to collect and enrich data to assist with this personalization process. They have developed a series of intelligent automation tools that help firms collect unstructured data, process it intelligently, and then use it to predict individual customer needs. Businesses can combine Robotic Process Automation (RPA) and cognitive bots to collect data, along with their employees’ input, which they can use to dynamically create individualized product categories that best suit an individual shopper’s needs.
WorkFusion has written a paper that offers a working knowledge of machine learning, with case studies showing how global data operations have radically improved data quality, speed, and ROI.
There has been an explosion in the use of artificial intelligence and predictive analytics by firms trying to turn unstructured data into useful data that they can use to gain an edge on the competition.
| How Can AI and Predictive Analytics Enhance Your Business Performance? | 0 | how-can-ai-and-predictive-analytics-enhance-your-business-performance-10189b9b7250 | 2018-05-01 | 2018-05-01 23:13:35 | https://medium.com/s/story/how-can-ai-and-predictive-analytics-enhance-your-business-performance-10189b9b7250 | false | 718 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Andrew Loader | Freelance writer and content marketer. Mainly writes about marketing & technology, as well as reinventing your life at http://thereinventionmen.com/. | fbd102eae3b2 | andrewloader | 28 | 34 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f161d26ead4a | 2018-09-19 | 2018-09-19 11:20:33 | 2018-09-19 | 2018-09-19 21:06:33 | 5 | false | ja | 2018-09-22 | 2018-09-22 09:26:11 | 20 | 1018d2a3fe7d | 6.801333 | 0 | 0 | 0 | Recently I had a chance to discuss transfer learning, but I realized I didn't really understand it, so this post organizes my understanding and serves as a reference note. | 3 | What is Transfer Learning
Recently I had a chance to discuss transfer learning, but I realized I didn't really understand it, so I'm writing this post to organize my understanding and to serve as a reference note.
Note (added 2018/9/22): A clear explanation of this topic happened to be published just recently, and it is worth reading as well.
What is Transfer Learning?
Transfer learning is the technique of taking a model trained on a task in one domain and reusing it for a task in a related but different domain. The typical framing is to apply a model trained on a data-rich task to a task with only a small amount of training data (the so-called cold-start problem).
Transfer learning works because of an underlying assumption: the source domain and the related target domain share low-level features. It follows, needless to say, that blindly reusing an arbitrary pretrained model will not always work. The figure below illustrates this well, so I quote it here.
From http://ruder.io/transfer-learning/
For a detailed study of transfer efficiency between tasks, see Taskonomy (CVPR '18 Best Paper), which succeeded in visualizing how well models transfer across multiple tasks. (reference)
Examples of transfer learning
Using pre-trained models
In tasks such as generic object recognition, it is known that sufficient performance can be obtained even with a small amount of training data by using weights pretrained on large-scale data such as ImageNet. In CNNs, the lower layers close to the input are known to capture universal features while the upper layers capture richer, more task-specific features, so the rough intuition is that retraining only a limited number of upper layers is usually enough.
From http://cs.brown.edu/courses/cs143/2017_Spring/proj6a/
When reusing a pre-trained model, it is common to retrain only the output layer if training data is scarce, and several upper layers if data is plentiful. There are also various training patterns, such as freezing the weights entirely, or lowering the learning rate while learning new, unseen examples. This framework of retraining a model starting from pretrained weights is widely known as fine tuning.
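As a toy illustration of this fine-tuning recipe, the following plain-Python sketch marks which layers of a hypothetical network would be frozen versus retrained. The layer names are made up and no real deep-learning framework is involved:

```python
# Toy illustration of fine tuning: freeze every layer except the top
# ones, so only the upper layers are updated during retraining.
# Framework-agnostic; the layer names are invented for the example.

layers = ["conv1", "conv2", "conv3", "fc1", "fc2_output"]

def fine_tune_plan(layers, n_trainable=1):
    """Mark the last n_trainable layers as trainable, freeze the rest."""
    cutoff = len(layers) - n_trainable
    return {layer: (i >= cutoff) for i, layer in enumerate(layers)}

# With little data, retrain only the output layer:
print(fine_tune_plan(layers, n_trainable=1))
# With more data, retrain several upper layers:
print(fine_tune_plan(layers, n_trainable=2))
```

In a real framework the same idea is expressed by disabling gradient updates for the frozen layers and attaching the optimizer only to the trainable ones.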
Adapting to a new domain
One transfer learning scenario is the case where the source domain and target domain share the same kind of observed data, but the data distributions differ (i.e., a domain bias exists). This is also called Domain Adaptation.
Typical machine learning tasks assume that training and test data are drawn from the same distribution, but real-world data often violates this assumption. As in the example below, one must account for all sorts of differences, from lighting and angle to color, and even the gap between a portrait drawing and a photograph of the same face. The general goal of Domain Adaptation is therefore to learn to align the source and target distributions in feature space.
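To make the "align the distributions in feature space" goal concrete, here is a deliberately crude stdlib-only sketch that standardizes source and target features to zero mean and unit variance. Real adaptation methods are far more sophisticated, and the feature values here are invented:

```python
# Toy version of the Domain Adaptation goal: bring source- and
# target-domain features closer together in feature space. Here we
# simply standardize each set to zero mean and unit variance, a crude
# stand-in for real adaptation methods, shown only to make the idea
# concrete.

def standardize(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / var ** 0.5 for x in xs]

source = [10.0, 12.0, 14.0]   # e.g. features from bright studio images
target = [1.0, 2.0, 3.0]      # e.g. features from dim real-world images

print(standardize(source))
print(standardize(target))
# After standardization both sets share mean 0 and variance 1.
```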
From Deep Visual Domain Adaptation: A Survey (CVPR '18)
Practical examples include research that uses CG or web data, which can be collected in large quantities, as the source and the real world as the target. In particular, the setting where the target domain has no labels at all is called Unsupervised Domain Adaptation (UDA), one of the most actively researched areas today. Methods that learn domain-invariant features (such as ADDA) are well known. (reference: CVPR '18 tutorial materials)
Learning the differences between domains
These methods learn to separate domain-specific features from features shared across domains, so that unlabeled target data can be classified well. Image-to-Image Translation methods such as StarGAN, which learn mappings between domains, also fall into this category. (These too are effectively a form of Domain Adaptation.)
Learning representations shared across domains
These methods handle domains without labeled data by learning representations that are not tied to any particular domain (or task) and can be used across several of them. Approaches include exploiting the intermediate representations of AutoEncoders or GANs.
Areas related to transfer learning
Beyond transfer learning, a variety of approaches tackle the problem of limited training data. They are listed briefly below. (I have only looked into these superficially, so some details may be imprecise.)
Semi-supervised Learning
When only a small number of labeled examples are available, these methods exploit them to make predictions about unlabeled data. A related approach is Data Distillation, which treats a model's outputs on unlabeled data as labels and feeds them back into training.
Weakly-supervised Learning
These methods train with supervision that carries less information than the desired output. An example is training an object detector using only image-level classification labels.
(Reference: the CVPR '18 tutorial on weakly supervised object detection)
Self-supervised Learning
These methods first learn feature representations on a pretext task that seems unrelated to the task of interest, then reuse them for the target task. (reference)
{Few, One, Zero}-shot learning
In tasks such as object recognition, long-tail categories may have only a handful of images, a single image, or none at all. The {Few, One, Zero}-shot Learning framework addresses this problem. For object recognition, proposed methods include using word embeddings or graph-structured knowledge bases such as WordNet.
Multi-task Learning
Rather than transferring a trained model, this approach builds a single model that solves multiple tasks from the start. Supporting multiple tasks generally requires a larger model. UberNet, for example, solves seven tasks simultaneously from a single image. (demo)
From UberNet (CVPR '17)
Data Augmentation
These techniques multiply the training data by transforming images, for example by warping them or changing their brightness. More recently, Cutout / Random Erasing, which randomly mask parts of the input image to regularize the model, and Mixup, which synthesizes a new sample from two training samples, have also come into use.
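Mixup in particular is simple enough to sketch directly: a new sample is a convex combination of two training samples and of their labels, with the mixing weight drawn from a Beta distribution. The sketch below uses only the standard library; the sample shapes and alpha value are illustrative:

```python
import random

# Minimal sketch of Mixup: a new training sample is a convex
# combination of two existing samples and of their labels. Real
# implementations draw lam from Beta(alpha, alpha); the standard
# library's random.betavariate does the same here.

def mixup(x1, y1, x2, y2, alpha=0.2):
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Two toy "images" (flattened) with one-hot labels:
x, y = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
print(x, y)  # a blend of the two inputs; the mixed label sums to 1
```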
参考
Transfer Learning — Machine Learning’s Next Frontier
転移学習:機械学習の次のフロンティアへの招待 (Japanese translation of the above)
August 2018 Update
Hello everyone!
It’s Tim here and I hope you are enjoying your Sunday. I just wanted to provide a brief update on Politics + AI and highlight two new articles we’ve published.
First, as some of you may have noticed, I have not written as many articles since July. This is because I recently accepted a new job that limits my ability to write on some topics. However, my goal is to still publish one article a month and will now focus more on the international aspects of AI such as geopolitics, trade, and human rights. My next article will be on the UN and what role it can/should play in the global governance of AI.
Since I no longer have the bandwidth to write on a weekly basis, I’ve recruited new writers in order to regularly publish content. In the past two weeks, we have published two new articles:
Abishur Prakash’s “AI-Politicians: A Revolution In Politics” — a deep dive into the political and ethical implications of AI-Politicians.
And Ryan Khurana’s The Artificial Intelligence Paradox — an examination of why the current advances in AI have had a relatively small impact on productivity and economic growth.
If you want to become a writer for Politics + AI, please fill out this form and I will get back to you to discuss next steps.
That’s all for now! Don’t forget to follow us on Twitter and Facebook and 👏 our articles so others can find them.
Cheers,
Tim
The Sapien Zoo
A short fiction story about future humans being connected to AI, visiting past Homo Sapiens in the Sapien Zoo.
Illustration by Jonat Deelstra.
Matilda was looking, wondrous, at a butterfly that flew past her golden hair.
She and her parents were almost at the big green arched gates.
Matilda’s mom reprimanded her for being distracted and urged her to attach her eBrain extension.
Reluctantly, Matilda attached her eBrain device to her right temple. Ever since she turned five years old she was mandated to wear it.
Except Matilda was of a curious nature and didn’t want life narrated to her by an electronic device. She found more satisfaction in exploring things on her own.
Last week at school, she found a small children’s book in a closed cabinet in the classroom. There it was on display as if it was an ancient artefact from a different era. More accurately, it was the only book in the room. She opened it and saw different pictures of various animals and people. None of those people were wearing eBrains attached to their temples. Another thing that intrigued her was that the different people depicted had all sorts of skin- and hair color. Not to mention the clothes they were wearing.
She had no idea who or what they were, but she didn’t want the information to be spelled out through that stupid device. And she didn’t want to ask her teacher. Guided by her interest, she decided to take the book home.
That night after dinner she had the courage to ask her mother about these animals and humans. Maybe she could tell her more about them? Matilda was wrong. Her mother was very disappointed in her for not using her eBrain to find out what kind of animals and humans had been depicted. Reluctantly she used her eBrain to learn more, but to her this felt like cheating. It was so easy, so fast. Matilda liked to make an effort to gain knowledge or to have a conversation with someone. Not with a machine.
Her father watched, worried. He and his eBrain analyzed Matilda’s state, her body language, her expressions and her emotions. He felt for her and decided to take the family to the Zoo, the Sapien Zoo.
So there they were: mom, dad and Matilda. Standing in front of the gate. At exactly 10.00am the gates opened.
A group of other visitors had gathered around Matilda and her family. All visitors had the same blank, somewhat bored look on their faces. Their eyes were wandering, focused on nothing in particular in the outside world, but intent on something in their inside world — the world being projected via their eBrain and lens extensions.
A hologram of a tall, dark haired woman appeared, wearing black overalls. She introduced herself as their guide and instructed her visitors to keep their eBrains on at all times, not to get lost, and to be cautious of the manipulative Homo Sapiens.
She gave Matilda a stern look, suspecting Matilda would be the one who would cause trouble. The hologram could see from her log files that Matilda had only just attached her eBrain and that she wasn’t an avid user.
The hologram spoke: “Welcome my fellow Humai, my name is Lauren and I will be your guide today. Please set your eBrain to ‘share modus’ with my account in order for me to see and hear your questions and thoughts. Welcome to the Sapien Zoo! I am very pleased to take you on a trip through time and show the development of our ancestors — or lack thereof.”
Some of the visitors giggled, suddenly awake from their bored trance.
“As you all know,” Lauren continued, “we were once Homo Sapiens. Until the Technological Age, men hadn’t evolved substantially since the Neanderthals. In this park we would like to show you our ancestry through the evolution of the Genus Homo, to what we are now.
Yes, Matilda, thank you for your question, at the end of the tour you will be able to see the animals in the Zoo.”
Matilda blushed, as if she was caught doing something she was not supposed to.
Lauren went on with her explanation: “The Sapien Zoo functions as a research center, looking into our basic behavior and traits, but also educating the young and the interested.”
The hologram looked at Matilda again and gestured to the group to follow her to the first stop: The Maasai people of Tanzania.
Behind a see-through fence, with lightning bulbs floating around in the air, Matilda saw a group of people with darker skin than she had. The people wore red cloths, wrapped around them with different patterns and stripes in black. Around their necks, arms and even their heads they wore different sorts of jewelry, all made of beads and some with feathers. Matilda was mesmerized by their beauty.
An elderly couple in the tour group pointed at the Maasai family, the man whispering something to the woman, they both laughed.
Lauren explained to the group that the Maasai originated around the Nile in Africa, that they had their own language and their own God. For more information, they could read further from their retina lens, connected to their eBrain.
Matilda heard all this in awe. It was the colors that attracted her most. Why did she not wear such beautiful clothes and necklaces?
Suddenly a man of the family threw a spear towards the group. The Humai ducked, but weren’t in danger, for the spear bounced at the invisible, electric fence.
Only a few visitors noticed of course, most were distracted by the information projected before their eyes.
The hologram reassured the Humai that no one could get hurt in this park. Its facilities and protection mechanisms used state of the art technology.
Matilda walked closer to where the invisible wall had been just before. Her mother looked worried and irritated and started to walk towards her daughter, but her husband stopped her and urged her to let Matilda roam a bit and feed her curiosity.
Matilda had not seen these Maasai people in the book she found. Matilda looked at the family. They stood there, in a proud stance before their hut. A father, mother, son and daughter, and at the entrance of the hut, a grandmother. At a closer look, they all had red paint on their faces.
The hologram was talking again in Matilda’s ear. In the corner of her right eye information and statistics appeared about these Maasai people. It only distracted Matilda, so she switched off her eBrain. The Maasai people stared at her. Matilda giggled and waved and shouted that they looked beautiful.
At first nothing happened, to Matilda’s great disappointment. But then she had an idea. In her ballet class she had just finished learning a new choreography. Matilda started dancing, with her typical poise and passion. The Maasai moved closer, curious in their own right as to what the girl was about to do.
The group behind Matilda watched her. Again her mom wanted to interfere, but her dad stopped her. Then, for a moment, nothing happened as if everyone was unsure how to respond to what just occurred.
Matilda looked at the Maasai daughter and smiled. The Maasai daughter looked at her mom, who nodded. Then, she danced as well. The little girl danced in a way Matilda had never seen before. When she was finished, Matilda clapped loudly. But soon she was roughly interrupted in her cheer by someone pressing her eBrain to her temple. Her mom.
Matilda heard the hologram say, that the dancing was highly unusual, as these Maasai were mostly violent and very primitive. She apologized for their behavior.
Matilda’s parents apologized to the other visitors in the group and to the hologram.
Disappointed, Matilda waved to the Maasai girl, her head hanging low. The girl waved back, but Matilda could not see it.
Lauren continued the tour. She explained that the next culture they were going to visit were the Chinese people — one of the most ancient human races of all time.
Matilda looked at a family sitting on the ground, cross-legged, in front of a low dinner table. They ate with something that looked like sticks. Matilda thought it was odd that this family was eating together, all at once, gathered around a table. She had never done that with her parents. They always had their pre-packaged, health optimized meals or shakes. Although everyone took them at set times, they never consumed them together like this Chinese family was doing now. Matilda asked her mom if they could eat with sticks sometime, at which her mother laughed and stroked her hair.
In the corner of her eye, Matilda was informed that the Chinese were once the biggest population on earth, until the Great Decay. She was baffled at the number she saw. She had never seen that many zeros. She decided to ask Lauren what the numbers meant.
Before Matilda could speak, Lauren was already aware of Matilda’s question. “Thank you for that interesting question, Matilda, good to see you have been studying the information on your eBrain. You could have also asked your software device, but I am glad I can publicly answer your question, young lady. That number is a billion.”
Some of the Humai in the group gasped and mumbled.
Although she did not grasp the meaning of that number, she nodded, understanding that it must have been quite a lot.
Outside the house where the Chinese family was eating, two elderly men were playing a game consisting of white stones with black signs on them. They had a smoking stick in the back of their mouths. Matilda zoomed in and learned that those things were cigarettes and that cigarettes killed many people back in those days. That was all before the Great Decay.
Matilda heard her teacher mention the Great Decay at school once. But the teacher had not explained it to her class, they were just too young for that kind of information. Matilda decided to look it up. A voice in her head explained in a censored version, especially designed for children aged 6–10:
The Great Decay happened just over 80 years ago when a certain dictator released chemical gas. This chemical gas had a bad effect on people. They would experience great pain, which led to death. First the dictator did it in his own country, to decimate the population and to ‘get rid’ of the inhabitants he felt were not part of his ideology. However, his most important motivation was to suppress his enemies. Over three quarters of its population died. A disease that was caused by the gas turned out to be contagious and it spread to other countries, the effects disastrous to the Homo Sapiens…
Matilda gasped and shrieked.
Her dad came up to her, synching his eBrain to hers. He gasped as well. So did the group and Lauren the hologram, for they tuned in to what Matilda was reading just before. It was not everyday one saw a child being so audacious to read about the Great Decay.
Even the Chinese people stopped eating and looked over in their direction.
“Matilda, how in the love of Tech did you come across that subject?” her dad asked.
The other Humai in the tour group were catching up with their devices, nodding their heads in disapproval. Matilda’s mom defended her daughter, praised Matilda for her interest and courage to be educated on such a horrendous subject in order to better understand the world around her. Another woman agreed with Matilda’s mom, proudly stating that she too had granted her children access to such information as they first started using their eBrains, to enhance their experience in this world. Her children were standing right beside her, although they were much older than Matilda.
Still shaken, Matilda moved on with the group.
The next group of Homo Sapiens they visited were sitting in front of their house, in a small garden, filled with all sorts of plants and flowers. Bushes of grapes surrounded the premises. The house had a grand appearance, dignified even. All windows were opened wide, curtains flapping outside. A man and a woman sat outside, drinking from a glass with a stem, filled with dark red liquid. The woman wore a long, light blue floral dress. The man was wearing a pine green polo, with a dark brown jumper loosely wrapped around his neck. He wore a hat and glasses and he had a sharp moustache. The table was filled with all sorts of food Matilda had never seen before, such as long, cylindrical dark brown things, which the woman was cutting into slices. The man spread a white-yellow, creamy substance on it. There were many bowls filled with food in all shapes and colors. Matilda was mesmerized by the sight and tempted by the smell of it.
“Ah, bienvenue, we’ve arrived at the French,” Lauren said.
Matilda saw how the French woman leaned in on her husband and pressed her lips on his cheek. Amazed, she let her eBrain analyze this gesture. It was known as a kiss.
However, that was all the information that was displayed. Matilda felt a strange feeling in her stomach, as if it was jumping around. She smiled. She looked back at her mom and dad. Her mom was looking irritated and was staring at nothing in particular. Her dad was looking from the French people to her mom, and shrugged.
The other Humai laughed at the sight. The voice of Lauren awoke Matilda from her gaze.
The group continued their tour past a small Dutch farm, which had a wooden mill, surrounded by green land with ditches around the edges. On it was a small house with a large shack at the back of it. Cows were grazing on the green grass, chickens were running around. Farmers were going about their work.
One farmer standing at the edge of the ditch started shouting at Matilda and the other visitors. She could not understand what he was saying.
At lunchtime the group walked past a Spanish hacienda. A man was sleeping on a bench with a hat on his face, protecting him from the sun. In the shade lay a woman and a baby, also both asleep.
For lunch, Multibars were handed out by a small robot, hovering through the group, making sure everyone got their feed.
Matilda let her eBrain explain a bit more about Spanish culture. She read that this Spanish family was having a short afternoon sleep, known as a ‘siesta’.
Abruptly, she was disturbed by loud singing and shouting. Opposite the Spanish cage, a merry group of men and women were dancing around in a circle, singing loudly in a long forgotten language. They were wearing thick layers of hairy clothing and most of them wore a hairy hat. The hats were black and brown. Some were of a mixture of grey and white.
These people were drinking from bottles holding transparent liquid. They clunked the bottles together, after which they all took a big gulp of the liquid. A few men let out a loud burp. Matilda laughed. Her eBrain analyzed the party of Homo Sapiens — they were known as Russians.
Matilda’s laugh drew the attention of one of the men, who walked from the group towards the fence. Since the fence of the cage was invisible, it was quite scary to Matilda to see that man walking up to her, getting so close. The man started shouting and pointing at Matilda and the tour group. Matilda ran to her parents and held on tightly to the fabric of her mother’s dress.
Other Russians soon joined the man, all shouting and making weird sounds. One man threw his bottle towards the group. The fence bounced the bottle back but the gesture was startling. The visitors anticipated a thwack on the head.
Lauren called on a few robots to taser the Russians with a blue electrical wave. Immediately the noise stopped. The Russians walked back to the dancing room, dazed, and after a few seconds burst out singing again like nothing happened.
Lunch was over and they continued their tour. Everyone still feeling a little uneasy.
Lauren explained to the group that ever since the invention of the eBrain, the Humai had surpassed the Homo Sapiens in many ways. The Humai were a more intelligent and efficient species. Matilda saw a few statistics popping up in the corner of her eye. She could not be bothered to read them, as she was upon the next cage.
She saw a boulevard with a couple of benches in front of a river. Above the river was a big brick bridge that connected the boulevard to the other side of the river. In the distance she saw some old-fashioned vehicles driving both ways on the bridge. Across the river she saw all these big stone buildings in different shapes and sizes. Some of them even reached the clouds!
It started to get dark in the cage and Matilda was intrigued by all the little lights she saw in those buildings. What a magnificent view!
Sitting on the benches people were looking at the view. All people sat with their backs to Matilda and the group. Matilda heard their guide explaining something about how the Great Decay had also effected the city these people were looking at in the distance. A city once known as New York City. Little did the people on the benches know that this city was a projection of what once was.
Matilda did not take notice of the story and she ignored the numbers and statistics appearing in the corner of her eye. She was charmed by a man who had put his arm around a woman, kissing her. Matilda’s eyes widened. On the next bench sat a family, with a little boy and girl running around the bench chasing each other. She wanted to play like that!
On another bench an elderly couple were both reading a book while holding hands. The people looked peaceful and loving. Matilda had tears in her eyes, but she had no idea why.
She looked back at the tour group. They stood there with their back to Matilda as well, looking at a see-through woman named Lauren.
Matilda was split between the people in the cage and the people looking at the cage, not really belonging to either one.
She sighed.
Suddenly, a woman in the tour group started screaming, pointing to the cage with the Russians.
As Matilda and the group watched, a Russian man went through the fence of the cage. Confused, the visitors looked at where the hologram had stood before. With a few flickers Lauren was gone.
A man shouted that he couldn’t see or hear anything. His eBrain didn’t work.
This triggered the whole tour group to check theirs. All heatedly waving in the air, agitated and pressing their eBrains.
Matilda attached her eBrain again, she neither saw nor heard anything. Utter silence. She could not help but feel as if some weight was being lifted from her.
She looked around to find out what had caused the sudden power failure, but she wasn’t able to find anything out of the ordinary.
The other Russians saw that one of them had moved outside of their designated area. Soon, they rapidly followed his example and went through the fence, drinking and singing. The visitors ran away, afraid.
Matilda felt a hand on her shoulder. When she looked behind her, she saw the girl who had just been chasing the boy at the boulevard. The girl waved and smiled, then ran away.
Then everything happened fast. One of the Russian men started threatening a man from the group of visitors with his bottle held high above his head. He was shaking it dangerously. Suddenly and with one swing the Russian hit the visitor on the head with the bottle. More of the Russians started shouting and approaching the group threateningly.
The visitors panicked, the group split up and everyone was running in different directions. Matilda followed her parents. On her way through the Zoo she saw more and more of the people who had once been locked up in a cage, now roaming about. The Spanish were awake and were walking curiously towards the Dutch farm. The Dutch farmer was leading his flock of cows and sheep outside of the fence, over a small wooden plank he placed over the ditch.
She saw the robots that had just tasered the Russians less than thirty minutes ago. They stood motionless, in their last active pose.
What had happened? How did all electronic devices suddenly stop? From the fences to the robots, even everyone’s eBrain. Was it just in the Zoo, or was it everywhere? Unfortunately, there was no voice inside their brains to explain the situation.
More people were running around, Humai and Homo Sapiens alike. But Matilda could not easily make out who was who. She could only pick out the Humai who were still wearing their eBrains and who were looking distressed.
Matilda looked at her mother, who was desperately pressing the button on her eBrain, trying to re-activate the device. Her dad was more calm and was closely watching his family, assessing the situation and considering a safe way out. He yelled at his wife, grabbed her eBrain and tossed it in the ditch of the Dutch farmer. He took both her hand and Matilda’s and started walking towards the gate. Both held on tight.
A giraffe walked past, clearly lost. Matilda was pointing excitedly at the animal. She recognized it from the book she had taken from school. Her parents slowed down and looked at the animal. For the first time that day, they both smiled at Matilda.
As they continued their way out of the park, the family encountered more of the other cultures, some calm and curious, some wild and furious.
Men and women with long dark hair and light brown skin were running towards them, spears in their hands. They all wore one or more feathers in their hair. One of them threw an arrow, luckily not aimed in their direction, it landed on the brick floor.
Matilda’s dad became frightened for his family’s safety, unsure where to go to.
Matilda felt another hand grabbing her remaining free hand. The hand was small like hers. She looked to her side. There was the Maasai girl, smiling at her. The Maasai girl spoke, but Matilda could not understand her. She pointed at her hut and pulled Matilda and her family towards it. It must have been both a comical and sweet sight to see two adults and two children walking hand in hand towards the hut, careful not to let each other go. Cows, chickens, even monkeys and other animals were either running past them or were surrounding the hut.
The Maasai girl approached her father and talked to him, pointing at Matilda and her family, making wild gestures. Matilda thought she asked her father to protect them. As the Maasai father and daughter spoke, the Maasai girl didn’t let go of Matilda. At one point she was even standing in front of Matilda, trying to make herself a bit bigger as if to shield Matilda.
Matilda’s mom suddenly screamed: “Tiger, tiger! Over there!” Behind them, a tiger approached them threateningly, sizing up his prey.
Above their heads, a spear flew. Matilda followed its course as it hit the tiger right in the chest. The beautiful creature fell down and breathed his last breaths. Matilda and her family gasped.
The thrower was the Maasai father, who stood there aggressively before Matilda’s family and his own daughter. The Maasai man called on his son, who joined them, spear in hand.
Both Maasai men stood on either side of Matilda and her family. The Maasai girl was still holding Matilda’s hand. They protected the family and defended them from the wild and furious animals and Homo Sapiens as they made their way to the park’s exit. As they reached the exit, the Maasai people stopped. The girl smiled at Matilda, squeezed her hand and nodded. Matilda did the same.
Matilda’s family walked out of the park, free again. Perhaps even a little more than when they had entered.
Looking everywhere, noticing changing elements in the world they were a part of. Buildings were just bricks and windows. No artificial information or images were projected on them anymore. Neon signs and holo projections were gone. None of the apartments around them were lit. A tram stood still in its tracks. Cars stayed in their position on the road. Its passengers wandering, confused and agitated. It seemed electricity was not only out in the Zoo, it was nowhere to be found. Something big was going on.
It was as if they had gone two centuries back in time. How would the Humai cope?
Homo Sapiens from the past were running among Matilda and her parents, escaping, exploring their ‘new’ world. Animals were creeping out and about, unsure of these brick surroundings.
Matilda looked up at skyscrapers, large banners were being rolled out. They showed the face of a man she had learnt about in class. However, she could not remember who he was. Her mom was shocked and covered her mouth with both her hands.
The banner signs read: We have gone too far. Let us start over. Let us be equal.
Liked this story? Please share it with someone who might like it too. It would be much appreciated :-).
Prefer to listen to short fiction stories while you’re commuting, walking, running or cooking? Listen to the Turner Stories Podcast.
Check out the Turner Stories Podcast in iTunes.
Originally published at www.turnerstories.com.
Application of Random Forest Algorithm in Remote Sensing Imagery Classification and Regression
Poem:
Even though I walk through the valley of the curse of dimensionality,
I will fear no overfitting,
for you are with me;
your bootstrap and your randomness,
they comfort me.
You prepare a prediction before me
in the presence of complex interactions;
you anoint me data scientist;
my wallet overflows.
Credit: (http://machine-master.blogspot.com/2014/02/random-forest-almighty.html)
According to Wikipedia, “Random forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees’ habit of overfitting to their training set.”
Breiman (2001) was the first to formally propose the random forests algorithm. It combines the concepts of random subspaces and bagging. As the name implies, the RF algorithm creates a forest from a number of decision trees. More decision trees (“trees” for short) generally mean more robust classification, leading to higher accuracy. The decision tree concept is based on a rule-based system.
For example, we may want to classify the vegetation of East Africa using rules. In ArcGIS, the conditional statement or rule would be based on precipitation (prec), the ratio of actual to potential evapotranspiration (aetpet), temperature, vegetation growth days, etc.
Con(“aetpet” < 0.45 & “prec” < 125,6,Con( “aetpet” >= 0.45 & “petprec” > 1,5,Con( “prec” >= 125 & “petprec” < 1,4,Con(“aetpet” >= 0.45 & “mtcm” < 17,1,Con(“mtcm” >= 17 & “mtcm” < 20 & “gdd5” < 4000,1,Con(“mtcm” >= 20,1,Con(“mtcm” < 20 & “gdd5” >= 4000 & “aetpet” < 0.18,2,Con(“aetpet” > 0.18,3))))))))
Accordingly, we would be able to classify the potential natural vegetation types in East Africa, apart from cropland, using the above decision trees.
1. Forest and montane communities.
2. Moist Woodland.
3. Dry Woodland (AET/PET > 0.18).
4. Semi-Arid, non-semi-desert (AET/PET < 0.45, PET/P < 5 and precipitation less than 700 mm).
5. Semi-desert (AET/PET < 0.45, PET/P < 5, and precipitation less than 500 mm).
6. Desert (AET/PET < 0.45 and PET/P >= 5).
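The nested ArcGIS Con() expression above can also be written as ordinary nested conditionals. Below is a direct, order-faithful Python transcription of the expression as written (the class names in the comments follow the numbered list above; the function name is mine):

```python
def vegetation_class(aetpet, prec, petprec, mtcm, gdd5):
    """Order-faithful translation of the nested Con() rule set above."""
    if aetpet < 0.45 and prec < 125:
        return 6  # Desert
    if aetpet >= 0.45 and petprec > 1:
        return 5  # Semi-desert
    if prec >= 125 and petprec < 1:
        return 4  # Semi-arid
    if aetpet >= 0.45 and mtcm < 17:
        return 1  # Forest and montane communities
    if 17 <= mtcm < 20 and gdd5 < 4000:
        return 1  # Forest and montane communities
    if mtcm >= 20:
        return 1  # Forest and montane communities
    if mtcm < 20 and gdd5 >= 4000 and aetpet < 0.18:
        return 2  # Moist woodland
    if aetpet > 0.18:
        return 3  # Dry woodland
    return None   # no rule matched
```

Like the Con() expression, the rules are evaluated top to bottom and the first matching condition wins.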
Now that we understand rule-based classification, let's see how random forests are built. Given a training dataset with polygons of a multiband image for each class (i.e., labelled data), the decision tree algorithm defines a set of rules. The same set of rules is then evaluated beyond the training polygons on a test dataset, which consists of roughly one third of all sampled pixels.
The random forests algorithm trains a number of trees on slightly different subsets of the data (bootstrap samples), each drawn at random with replacement from the training cases. This group of trees forms an ensemble. Each decision tree in the ensemble votes for the classification of each input case.
The ensemble in RF is a group of randomly created decision trees. Each decision tree is a single classifier, and the target prediction is based on majority voting: every classifier votes for one target class out of all the output classes, and the class that receives the most votes is taken as the final prediction.
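The bootstrap-plus-majority-vote idea can be sketched from scratch around scikit-learn's decision tree. This is a simplified illustration of the mechanism described above, not a replacement for `RandomForestClassifier`; the class name is mine:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class TinyForest:
    """Minimal bagging ensemble: each tree is trained on a bootstrap
    sample and predictions are decided by majority vote."""

    def __init__(self, n_trees=25, seed=0):
        self.n_trees = n_trees
        self.rng = np.random.default_rng(seed)
        self.trees = []

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.n_trees):
            idx = self.rng.integers(0, n, size=n)  # bootstrap sample (with replacement)
            tree = DecisionTreeClassifier(
                max_features="sqrt",  # random feature subset at each split
                random_state=int(self.rng.integers(1_000_000)),
            )
            self.trees.append(tree.fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])  # (n_trees, n_samples)
        # Majority vote across trees for each sample
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

In practice one would simply use `sklearn.ensemble.RandomForestClassifier`, which implements the same idea with many refinements.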
Random Forest (RF), as an ensemble learning method, uses several weak classifiers to classify discrete entities. For example, a given number of land-cover classes (built-up, vegetation, soil, water bodies) can be classified from multiband satellite imagery based on their distinct spectral properties in the different bands of the satellite sensor.
Input: Multiband imagery Output: Land-use land-cover classes
Let’s say the number of classes we want in the land-cover classification system is five. First, we label some areas in the image as cropland, soil, built-up, forest, and water body based on visual interpretation of their spectral reflectance properties. For example, in a false color composite (FCC) satellite image (e.g., bands 5-4-3 in Landsat 8), forests appear dark red and water bodies appear black (water absorbs infrared). In addition, band ratios are used to enhance the differences between land covers. Built-up areas (NDBI: Normalized Difference Built-Up Index), vegetation and soil (SAVI: Soil Adjusted Vegetation Index), and water bodies (MNDWI: Modified Normalized Difference Water Index) are identified visually from the band-ratio image.
We draw polygons of these land-cover classes, which are used to train the random forest model. This process is called supervised classification: the machine (PC) is trained to learn which digital numbers (DNs) represent each land cover and applies this knowledge to the rest of the image. In a humanized version of a decision tree, the questions for the classes would be:
Ø If a sampling polygon has high IR (Infrared Radiation) reflectance, then it may be a forest.
Ø If a sampling polygon has low reflectance in IR, then this may be a water body.
Ø If a sampling polygon has high NDBI, then this may be a built-up area.
Ø If a sampling polygon has a high SAVI, this may be cropland; if it has a low SAVI, this may be soil.
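The indices mentioned above all follow the normalized-difference pattern. Below is a sketch with the commonly published formulas and made-up reflectance values; the band choices and the soil-brightness factor L = 0.5 are illustrative assumptions, not values from this article:

```python
def ndbi(swir, nir):
    """Normalized Difference Built-Up Index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (Green - SWIR) / (Green + SWIR)."""
    return (green - swir) / (green + swir)

# Illustrative reflectance values (0-1) for a vegetated pixel:
print(savi(nir=0.45, red=0.08))   # high SAVI suggests vegetation
print(ndbi(swir=0.20, nir=0.45))  # negative NDBI suggests not built-up
```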
On the other hand, in regression, predictors of vegetation biomass per area per year in a given location may be regressed. Given the dependent variable (biomass) and several climatic predictors such as rainfall, temperature, vapor pressure, solar energy (continuous variables), and land cover (a discrete variable as a dummy), we would be able to know how much of the variance (in percent) in biomass between places is explained by these independent variables (see my previous posts).
The process flow of random forest algorithm
A random sample of the cases is taken; in imagery, the cases can be compared with individual pixels of a multiband image. Samples for each tree are drawn with replacement, so a pixel may be selected more than once while other pixels are not selected at all for that tree.
A subset of variables (the number of which is denoted m) is chosen, much smaller than the total number of variables, and the best split (based on the Gini score) is determined on this subset. Choose values of m that are neither too low nor too high: m is the only setting in the algorithm to which the model is sensitive, i.e., the number of variables used largely determines the classification accuracy. Increasing m increases the correlation between decision trees (bad) but also the strength of each individual tree (fewer misclassified pixels); decreasing m reduces the correlation between trees but weakens their individual predictive power.
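The "Gini score" referred to here is Gini impurity, 1 - sum of the squared class proportions, computed for the cases falling in a node; a split is good if it produces purer child nodes. A short sketch of the standard definition (not specific to any package):

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k ** 2)."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini_impurity(["forest"] * 10))          # 0.0 -> pure node
print(gini_impurity(["forest", "water"] * 5))  # 0.5 -> maximally mixed (two classes)
```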
For each tree, about one-third (1/3) of the cases are left out of the bootstrap sample; these out-of-bag pixels play no role in building that tree and serve as its test set. This test set is used to compute the error rate of class prediction, and the average error rate is calculated over all trees built.
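The one-third figure follows from sampling with replacement: each case has probability (1 - 1/n)^n, roughly e^-1 or about 0.37, of never being drawn for a given tree. A quick simulation (illustrative only):

```python
import random

def oob_fraction(n_cases, seed=0):
    """Draw a bootstrap sample of size n_cases with replacement and
    return the fraction of cases that were never selected (out-of-bag)."""
    rng = random.Random(seed)
    drawn = {rng.randrange(n_cases) for _ in range(n_cases)}
    return 1.0 - len(drawn) / n_cases

print(oob_fraction(100_000))  # close to 1/e, i.e. about 0.368
```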
The importance of each band or band ratio is calculated by running the out-of-bag cases down the tree and counting the number of votes for the predicted class. The values of one variable are then randomly permuted, and the cases are run down the tree again. Subtracting the number of votes obtained with the permuted variable from the number obtained with the unchanged data yields a measure of that variable's effect; averaging this effect across all trees gives the variable importance value.
Advantages of Random Forests
1. The random forests algorithm has relatively high accuracy among algorithms for classification.
2. It is less likely to overfit (i.e., to fail to generalize beyond the training data) as more trees are added.
3. It can handle hundreds (or even thousands) of variables and large data sets.
4. Unlike neural nets, it yields an estimate of each variable’s importance.
5. It has a robust method for handling missing data. The most frequent value for the variable among all cases in the node is substituted for the missing value.
6. It has a built-in method for balancing unbalanced data sets (one class less frequent than other classes).
7. It runs fast! Hundreds of trees with many thousands of cases with hundreds of variables can be built in a few minutes on a personal computer.
I have added some remote sensing context to random forest classification. I hope you have gained something from it. If you need clarification or further description, please let me know at [email protected].
Reference:
Handbook of Statistical Analysis and Data Mining Applications, Chapter 11, Classification, Elsevier Inc.
http://dataaspirant.com/2017/06/26/random-forest-classifier-python-scikit-learn/
| Application of Random Forest Algorithm in Remote Sensing Imagery Classification and Regression | 0 | application-of-random-forest-algorithm-in-remote-sensing-imagery-classification-and-regression-101ba28bf6e1 | 2018-04-23 | 2018-04-23 17:37:10 | https://medium.com/s/story/application-of-random-forest-algorithm-in-remote-sensing-imagery-classification-and-regression-101ba28bf6e1 | false | 1,436 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Kaleab Woldemariam | null | f11e22293e02 | gis10kwo | 12 | 29 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | ec10e05abbed | 2018-06-15 | 2018-06-15 09:58:53 | 2018-06-15 | 2018-06-15 10:17:45 | 15 | false | zh | 2018-06-15 | 2018-06-15 10:17:45 | 7 | 102049de6888 | 4.98 | 0 | 0 | 0 | 各位关注Cortex项目的小伙伴大家好, | 3 | Cortex项目进度报告<20180615第六期>
Hello to everyone following the Cortex project,
The sixth Cortex project progress report is hot off the press,
bringing you the latest developments on the Cortex project over the past two weeks.
For the Dragon Boat Festival, all members of the Cortex Labs team
wish our supporters and investors a happy holiday with their families! We forge ahead, through wind and rain together.
Community Building
(Caption: data from Coinmarketcap)
As of June 15, 2018, according to Coinmarketcap, the circulating market cap of CTXC was 117,456,461 USD / 17,754 BTC / 228,893 ETH.
The number of CTXC holding addresses reached 20,667 and the number of transactions reached 33,321; both holdings and transaction counts rose steadily (data source: Etherscan).
The airdrop campaign is still running and community engagement has risen further. The bugs that surfaced in the recent airdrop have been fixed, community activity keeps growing steadily, and the community environment has improved.
Scan the QR code below for the CTXC airdrop tutorial:
Entry point for joining the Cortex airdrop and winning CTXC rewards:
PC link: http://t.cn/R3AvZ9V
Or scan the QR code below:
As of the afternoon of June 15:
Twitter followers grew to 22.1K;
the English Telegram group grew to 81K+ members;
the Chinese Telegram group grew to 26K+ members;
Reddit followers grew to 6K;
Weibo grew from zero to 2.2K in the month since the account was created.
Listed on SWFT Blockchain
On June 5, 2018, Cortex officially went live on the one-stop transfer platform SWFT Blockchain. Hailed in Silicon Valley as "the next-generation global blockchain transfer protocol", SWFT Blockchain applies blockchain, machine learning, and big data to deliver low-cost, secure, and fast coin-to-coin exchange.
Media Coverage and Online/Offline Events
Japan and Korea
On June 7 and 8, 2018, the Cortex Labs team was invited to the Blockchain Korea Conference held in Seoul. Founder and CEO Ziqi Chen gave a talk on AI + Blockchain and joined several project founders for the conference's panel discussions. The next day, Cortex held its first offline Meet Up in the Korean market, drawing local investment institutions, academics, and media. The CEO also gave an exclusive interview to the authoritative Korean outlet Asia Economy TV, answering various technical questions from the media and introducing the team. The Cortex team also traveled to Japan to visit local artificial intelligence companies and discuss potential cooperation.
United States
On June 11 and 12, 2018, Cortex Labs was invited to speak and join panel discussions at the CPC Crypto Developers Conference in Silicon Valley, which drew thousands of developers from around the world. At the conference, the Cortex team met the project's academic advisor, Professor Whitfield Diffie, winner of the 2015 Turing Award.
Taiwan
On June 14, 2018, at the Taipei stop of the OKEx global Meet Up series, the Cortex team was likewise invited to take part and presented an introduction to the project.
Reddit AMA
From 9:30 to 12:30 (Beijing time) on June 8, 2018, the Cortex team held an AMA (Ask Me Anything) with project fans from the international Cortex community on Reddit. Following the first Reddit AMA in April, this was the second Q&A session Cortex held with the community in 2018. In a Reddit AMA, community fans from all over the world can take part, and the project team must answer every question the fans raise while the event is running.
Mainstream Media Coverage
Cortex has also been covered by major mainstream media in the US market, including NBC, CNN, CBS, and China Daily; some outlets described the Cortex project as "The First Next Gen" (a pioneer among next-generation technologies).
Contact Us
Website: http://www.cortexlabs.ai/
Twitter: https://twitter.com/CTXCBlockchain
Facebook: https://www.facebook.com/cortexlabs/
Reddit: http://www.reddit.com/r/Cortex_Official/
Medium: http://medium.com/cortexlabs/
Telegram: https://t.me/CortexBlockchain
Chinese Telegram: https://t.me/CortexLabsZh
| Cortex项目进度报告<20180615第六期> | 0 | cortex项目进度报告-20180615第六期-102049de6888 | 2018-06-15 | 2018-06-15 10:17:47 | https://medium.com/s/story/cortex项目进度报告-20180615第六期-102049de6888 | false | 70 | AI on Blockchain - The Decentralized AI Autonomous System | null | CTXCBlockchain | null | Cortex Labs | cortexlabs | AI,BLOCKCHAIN,CRYPTOCURRENCY,CTXC,CORTEXLABS | CTXCBlockchain | 区块链 | 区块链 | 区块链 | 617 | Li-Qing Wang | null | fc2d99b563f9 | liqingnz | 2 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-12 | 2018-06-12 12:14:43 | 2018-06-12 | 2018-06-12 12:49:24 | 6 | false | ru | 2018-08-23 | 2018-08-23 16:12:41 | 9 | 10204a83c0ab | 3.900943 | 1 | 0 | 0 | Репозиторий с общим кодом — одна из фундаментальных идей в разработке программного обеспечения. Библиотеки делают программистов гораздо… | 5 |
Introduction to TensorFlow Hub: a Library of Machine Learning Modules for TensorFlow
A shared code repository is one of the fundamental ideas in software engineering. Libraries make programmers far more effective. In a sense, they even change the very process of solving programming problems.
What would the ideal library look like from the point of view of a machine learning developer?
We want to share pre-trained models. A trained model published as open source lets a developer without access to compute resources or a proprietary dataset fine-tune it and apply it to their own task. For example, training NASNet takes thousands of GPU-hours. By sharing the trained weights, a developer makes it much easier for colleagues to adapt the model for their work.
The idea of such a library for machine learning inspired us to create TensorFlow Hub, and today we are glad to share it with the community. TensorFlow Hub is a platform where you can publish, discover, and reuse machine learning modules written in TensorFlow. By a module we mean a self-contained, isolated part of a TensorFlow graph (with trained weights) that can be reused in other tasks. With a module, a developer can train a model on a smaller dataset, improve generalization, or simply speed up training.
Image Retraining
As a first example, consider the technique of training an image classifier on a small dataset. Modern recognition models contain millions of parameters and weights, and computing and labeling data takes time. The Image Retraining technique lets you train a model under constraints on data and computation time. Here is what it looks like in TensorFlow Hub:
The main idea is to use an existing image recognition model to extract features and then train a new classifier on top. Individual TensorFlow Hub modules are available by URL (or in the file system), including variants of NASNet, MobileNet (v2), Inception, ResNet, and others. To use a module, import TensorFlow Hub, then copy and paste the module URL into your code.
Modules for image processing
Each module conforms to a defined interface, so modules can be swapped without knowledge of their internals. In the example, there is a method that returns the expected image size. The developer only has to supply a set of images of the right size and feed it to the feature extraction module, which performs the image preprocessing itself. This lets you go from images to features in practically one step. After that, you can train a linear model or another similar classifier.
Note that the module in the example is hosted by Google and is available in several versions (choose a stable version for your experiments). Modules can be applied like ordinary Python functions to build a graph. Once exported to disk, a module becomes self-contained and can be used without access to the code and data with which it was created and trained (although those, of course, can also be published).
Text Classification
Let's look at a second example. Imagine you want to train a model that classifies movie reviews as positive or negative, but you only have a small dataset (say, 400 reviews). Since there are few examples, it makes sense to use a word embedding pre-trained on a much larger corpus of words. Here is what that looks like with TensorFlow Hub:
As before, we start by choosing a module. TensorFlow Hub offers text-processing modules for various languages (English, Japanese, German, Spanish), a word2vec model trained on Wikipedia, and an NNLM embedding trained on Google News:
We will use the word embedding module. The code above loads the module, uses it to preprocess a sentence, and then obtains the embedding for each token. This means you can go from a sentence in your dataset to a format suitable for a classifier in a single step. The module takes care of splitting the sentence into tokens and of details such as handling out-of-vocabulary words. Since both preprocessing and embedding are implemented inside the module, it is easy to experiment with different embedding datasets or different preprocessing stages without constantly editing your code.
If you want to do all of this yourself and learn how TensorFlow Hub interacts with TensorFlow Estimators, read the getting-started guide.
Universal Sentence Encoder
Here is an example using the Universal Sentence Encoder. It is a sentence-level embedding trained on a broad collection of datasets (hence "universal"). Among the tasks it handles well are semantic similarity, text classification, and clustering.
Just as with image retraining, a large labeled dataset is not required to solve the task. Let's look at the encoder at work on restaurant reviews:
To learn more, check out this tutorial.
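Semantic similarity between sentence embeddings is usually scored with cosine similarity. Below is a minimal sketch on made-up embedding vectors; the actual encoder call via TensorFlow Hub is omitted, and the vectors here are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 4-dimensional embeddings of two sentences:
emb_a = [0.1, 0.3, -0.2, 0.4]
emb_b = [0.1, 0.25, -0.1, 0.5]
print(cosine_similarity(emb_a, emb_b))  # close to 1 suggests similar meaning
```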
Other Modules
TensorFlow Hub is more than just text and image classification. On the site you will also find modules for Progressive GAN and Google Landmarks Deep Local Features.
Important Notes
First, remember that modules contain executable code, so only download and use them from trusted sources.
Second, be objective: both examples shown above use large, pre-collected datasets. When reusing a dataset, it is important to keep in mind its limitations and how they may affect your product.
We hope TensorFlow Hub proves useful in your project! Visit tensorflow.org/hub to start using the library.
https://neurohive.io/ru/frameworki/vvedenie-v-tensorflow-hub-biblioteku-modulej-mashinnogo-obuchenija-dlja-tensorflow/
Original: Josh Gordon; translation: Eduard Pokonechny.
| Введение в TensorFlow Hub: библиотеку модулей машинного обучения для TensorFlow | 10 | введение-в-tensorflow-hub-библиотеку-модулей-машинного-обучения-для-tensorflow-10204a83c0ab | 2018-08-23 | 2018-08-23 16:12:41 | https://medium.com/s/story/введение-в-tensorflow-hub-библиотеку-модулей-машинного-обучения-для-tensorflow-10204a83c0ab | false | 782 | null | null | null | null | null | null | null | null | null | Library Usage | library-usage | Library Usage | 24 | NeuroHive Ru | Блог переехал на Хабр -https://habr.com/users/neurohive/posts/ | 5996154fc7bf | neurohiveru | 77 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-14 | 2018-07-14 04:04:10 | 2018-07-17 | 2018-07-17 03:46:13 | 2 | false | en | 2018-07-17 | 2018-07-17 03:46:13 | 9 | 10225734aa9a | 3.534277 | 0 | 0 | 0 | Keeping track of your Amazon inventory might sound like a chore, and a very difficult chore at that. It usually entails poring over… | 5 | Why You Should Automate Your Inventory Forecasting — Part 1
Keeping track of your Amazon inventory might sound like a chore, and a very difficult chore at that. It usually entails poring over spreadsheets upon spreadsheets on your laptop and making harried calls to your suppliers. It could be stressful and time-consuming, but ultimately, it’s essential to your business.
Inventory management keeps your warehouse organized, allows you to manage your time efficiently, and helps you plan better. With that said, good inventory management also helps you take in more revenue and get more loyal customers.
Inventory management is a full-time effort for every business owner out there. It's part of the process of managing your ecommerce business. It eats up a lot of time and energy, so it's quite prone to problems too.
There are a lot of problems that Amazon sellers could face when managing their inventory. There could be missing orders, backlog orders, or unpaid orders.
Don’t even let us get started on stockouts. Stockouts are one of the major problems of Amazon sellers everywhere. Inventory stockouts affect your Amazon ad campaign and gets you behind on competition.
If you’re a constantly stressed Amazon seller, this article is for you. Find out how to solve your inventory problems with an automated solution you deserve.
Why are stockouts bad for your business?
Sold out all inventory stocks? Bad for business
Stockouts cost businesses a lot. They aren’t that obvious, but they happen. Aside from losing a lot of productivity, potential sales, and customers, there are several other reasons why stockouts are bad for your business. Here are some of the ways how stockouts hurt your Amazon sales:
Your customers become unhappy
Unhappy customers are even worse than lost potential customers. Why? Most of the people in this category could have been loyal customers who grew dissatisfied with your stockouts. When they move their business somewhere else, you lose not only potential sales but also repeat sales.
You incur greater warehouse and freight costs
When you have stockouts, the warehouse fees don’t wait for you. You’ll keep paying them even though you’re still waiting for your supplies to arrive. Speaking of arriving deliveries, you’re also paying for increased freight costs for expedited deliveries, ordered during a time of panic caused by stockouts.
You overstock as a consequence
During the panic period that stockouts cause, most sellers stock up in excessive amounts to anticipate the increase in demand. Then, after the smoke clears, they're left with too much stock stuck in their warehouses and no one to sell it to. Overstocking leaves you with a variety of issues, such as increased warehouse costs, shifting demand, and warehouse clutter.
These are the not so obvious costs of stockouts and overstocking. You may not notice it, but these problems actually hurt your business big time.
You might be scratching your head wondering how to solve this problem. But there is one solution most entrepreneurs use.
Here’s where inventory forecasting comes in.
What is inventory forecasting?
Inventory forecasting can help you save from bad situations.
Inventory forecasting is a strategy based on the Goldilocks Principle. The Goldilocks Principle refers to making sure you’re neither overstocked nor understocked.
When you're neither overstocked nor stocked out, you are in the "Goldilocks zone". The Goldilocks Principle is all about striking the perfect balance in inventory management and winning at the same time.
This is an ideal situation for all Amazon sellers, considering that it gives you your much-deserved peace of mind and organization. Inventory forecasting is the number one tool to get you to the Goldilocks zone, or so the experts say.
Here’s how inventory forecasting works. Inventory forecasting involves choosing the best forecasting technique. After you’ve selected the most apt forecasting technique for you, it’s an array of calculations and spreadsheets from then on.
Oh, and did we mention it also involves formulas you need to use?
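One such formula, not from this article but standard in inventory texts, is the reorder point: the stock level at which you should place a new order so you don't run out during the supplier's lead time. A sketch with hypothetical numbers:

```python
def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Classic reorder point: expected demand during the supplier lead
    time, plus a safety-stock buffer against demand spikes."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical SKU: 12 units/day, 14-day lead time, 40 units of buffer.
print(reorder_point(12, 14, 40))  # 208 -> reorder when stock falls to 208
```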
While inventory forecasting is a necessity for your Amazon business, that might not be the case for many very busy Amazon sellers. With things to consider like marketing, customer service, and product research or production to deal with, inventory management and forecasting takes up a lot of time and energy.
Plus, it’s also quite boring.
This is where automated inventory forecasting comes in.
Automated inventory forecasting uses software that automatically calculates demand for your products and the amount of goods you have to stock. It's software that integrates easily with your Amazon shop and pulls in all the important data.
This comes in handy for Amazon sellers, especially when your product is in its demand season. It helps you prepare for sky-high demand during holidays and other peak seasons. It also helps you avoid overstocking, because it tells you exactly when, and how much, you need to stock up.
What’s the deal with automated inventory forecasting? Is it something that you should invest in?
At AiHello we think so. And we will explain that in subsequent posts about our one touch inventory forecasting system. Stay tuned…
| Why You Should Automate Your Inventory Forecasting — Part 1 | 0 | why-you-should-automate-your-inventory-forecasting-part-1-10225734aa9a | 2018-07-17 | 2018-07-17 03:46:13 | https://medium.com/s/story/why-you-should-automate-your-inventory-forecasting-part-1-10225734aa9a | false | 835 | null | null | null | null | null | null | null | null | null | Ecommerce | ecommerce | Ecommerce | 46,740 | AiHello | Ai Intelligence for all your ecommerce needs | 8fff111612c5 | AiHellos | 6 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-10 | 2018-08-10 11:02:28 | 2018-08-10 | 2018-08-10 11:35:14 | 1 | false | pt | 2018-08-10 | 2018-08-10 11:54:01 | 0 | 1022f56e5af7 | 1.909434 | 0 | 0 | 0 | Se você é um cientista de dados ou programador, ou mesmo um entusiasta por tecnologia, as chances são de que você já tenha se enamorado… | 3 |
Artificial General Intelligence Is for the Brave
If you are a data scientist or programmer, or even just a technology enthusiast, the odds are that you have already fallen in love with the idea of the technological singularity (the moment when computers equipped with artificial intelligence surpass human capability), or even with a step before it, so-called Artificial General Intelligence (AGI).
If you work in technology (not just as a hobby), the odds are also that you have noticed that talking about AGI does not excite employers, bosses, recruiters, and so on. Depending on where you are, you can almost see your interlocutor signaling "keep your voice down" or "this guy is out there."
That is entirely understandable. Although state-of-the-art algorithms have achieved superhuman results in many areas (yes, AlphaGo is the classic example, but there are others), the truth is that we are as far from an algorithm that could be considered AGI as we were from sending a man to the Moon a century ago.
But note, the comparison is deliberate. Before the US and Russia managed to put the first satellite into orbit, talk of sending a man to the Moon sounded insane. In fact, the term coined to describe such people, as you may know, was "lunatic." That alone shows how the advanced minds of that time were judged when they expressed their goals. And yet, as we now know, the space race produced incredible technological innovations that shaped the world we live in today.
And despite the herculean, seemingly impossible effort, we now know the goal was achievable. Still, reaching it required something important: courage.
The minds of that era capable of making it happen undoubtedly faced the fear of failure. Of shame, humiliation, and sideways glances. Imagine the courage of the first person to tell their government that yes, it could set aside billions and mobilize the entire state machine, because yes, they would be able to reach the goal.
If you want to investigate, research, or develop AGI, you will need courage. Not only that, of course. When you aim for the Moon, impeccable organization, unshakable determination, and the humility to work in a team and collaborate with people who are sometimes smarter and more capable than you are the bare minimum. But many brilliant people succumb to fear.
An AGI, despite the infamous "Skynet" fear, is far more likely to help us with much nobler goals (a cure for cancer, food production and distribution, the elimination of hunger, etc.), with the potential to revolutionize the world, labor relations, production, and much more.
The next race has already begun. Don't be afraid to join it.
| Inteligência Artificial Geral é pra quem tem coragem | 0 | inteligência-artificial-geral-é-pra-quem-tem-coragem-1022f56e5af7 | 2018-08-10 | 2018-08-10 11:54:01 | https://medium.com/s/story/inteligência-artificial-geral-é-pra-quem-tem-coragem-1022f56e5af7 | false | 453 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Paulo Since | Programmer, web designer, speech therapist, electrician, addicted to technology, totally family and passionate by artificial intelligence | c6fdb8afe954 | paulosince | 16 | 42 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-30 | 2018-08-30 20:57:28 | 2018-08-30 | 2018-08-30 20:59:35 | 3 | false | en | 2018-08-30 | 2018-08-30 20:59:35 | 1 | 10230b32daf3 | 2.648113 | 1 | 0 | 0 | by Clive Higgins, COO | 5 | Innovation Management: The Value of Seeing What You Have
by Clive Higgins, COO
If your job is to get your company, team, or community to innovate, you know how organizational forces can make it hard to even try something new. Visualizing the resources available is an effective first step in overcoming some of those organizational forces. Simply being able to see, and show, what you have allows you to make a compelling case for marshaling resources and even spark some initial interactions in that direction.
The Challenge: Moving Heaven and Earth
Within your organization, processes and structure already exist for doing things, and these can sometimes make it hard to try new things. People lose sight of each other, of what each other is doing, and how they might have worked together. The thought of trying new approaches is painful and sometimes demoralizing. As a result, successful teams may even stagnate.
So innovation managers, research development professionals, and team leaders of all kinds are forced into a battle with organizational inertia. No one can move what feels like a large celestial body or rearrange a solar system.
The Solution: Returning to Primordial Soup
Despite these impeding forces, there are ways that you can visualize the component parts of your teams and the relationships between them, as if the forces didn’t exist. By returning to your community’s primordial soup, you can reimagine and explain how people and their work might collide and recombine to create new things.
Instead of looking at the data you have in lists or tables, network visualizations can uncover relationships you never knew existed. Just seeing the resources available outside of their current structure sets your mind and your audience’s minds free to imagine what could be. You can form cogent notions for why new teams should be formed, where untapped ideas might live, and how novel approaches can be pursued. In fact, they almost pop out of the visualization at you.
An Example in Medical Research
The Osher Center for Integrative Medicine, a collaboration between Brigham & Women’s Hospital and Harvard Medical School, strives to be a “Center without Walls.” Facilitating collaborations among its many researchers and clinical practitioners is critical to its mission of enhancing human health, resilience, and quality of life.
Osher decided to feature on its website maps of connections between individuals and across institutions in both research and clinical practice.
Just by visualizing their community of researchers and their affiliations, Osher revealed the myriad of collaborations happening across disciplines and institutions.
But they didn’t stop there. They added the researchers’ work product, their publications, to the data.
This illuminated more untapped value. It showed coauthorship, which indicated where in the network map, i.e. who, was a hub for research. They saw the thought leaders in their network and with whom they are already collaborating.
Cross-referencing that with specializations, Osher could see who in their community had already crossed domains and who their collaborators are.
One of my favorite sayings is: “If you want something different to happen, you have to do things differently.” Too many of us look at this kind of information in lists or tables. Get modern and you will see new things. It starts with the same data as org charts and personnel directories, but it’s represented differently to expose thought leadership and collaboration.
| Innovation Management: The Value of Seeing What You Have | 1 | innovation-management-the-value-of-seeing-what-you-have-10230b32daf3 | 2018-08-30 | 2018-08-30 20:59:36 | https://medium.com/s/story/innovation-management-the-value-of-seeing-what-you-have-10230b32daf3 | false | 556 | null | null | null | null | null | null | null | null | null | Healthcare | healthcare | Healthcare | 59,511 | Exaptive | Our mission is more data-driven innovation, and we believe interoperability, modularity, and community make them happen. | b64a2f7d224a | exaptive | 9 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 74d3d7d95404 | 2017-09-10 | 2017-09-10 08:59:46 | 2017-09-10 | 2017-09-10 09:03:42 | 1 | false | en | 2017-09-10 | 2017-09-10 09:03:42 | 16 | 1025b28e0922 | 3.660377 | 1 | 0 | 0 | Vladimir Putin on AI; the ethics of human augmentation; how to regulate AI; robots tax could become a reality in San Francisco; and more! | 5 |
This week — Vladimir Putin on AI; the ethics of human augmentation; how to regulate AI; robots tax could become a reality in San Francisco; on gene therapies; and more!
H+ Weekly is a free, weekly newsletter with latest news and articles about robotics, AI and transhumanism. Subscribe now!
More than a human
The Ethics of Experimentation: Ethical Cybernetic Enhancements
Here’s a transcript from Alex Pearlman’s lecture at King’s College in London, where she argues that members of the grinder subculture who experiment on themselves with cybernetic augmentations with the goal of becoming cyborgs are performing ethically permissible, non-therapeutic enhancements that should be tolerated and even embraced by medical and regulatory institutions.
Artificial Intelligence
Vladimir Putin: Country That Leads in AI Development “Will be the Ruler of the World”
Vladimir Putin, the President of the Russian Federation, spoke on Friday on the potential of leaders in AI to use the advanced technology to rule over the world. Soon after Elon Musk tweeted that AI race could cause WWIII.
How to Regulate Artificial Intelligence
In this article, Oren Etzioni from the Allen Institute for Artificial Intelligence outlines three rules to make AI behave nicely. These rules are: AI must be subject to the full gamut of laws that apply to its human operator, AI must clearly disclose that it is not human, and AI cannot retain or disclose confidential information without explicit approval from the source of that information.
Where will AGI come from?
Here are slides from Andrej Karpathy’s talk from YConf where he explores the current state of AI and tries to find out where will artificial general intelligence (AGI) come from.
We Can Create a Great Future With AI If We Start Planning Now
Physicist Max Tegmark is optimistic about the future of artificial intelligence and its limitless potential. However, he believes people have a limited view of what AI truly is, and that there isn’t enough being done to ensure we’re safe from it.
Noriko Arai — Can a robot pass a university entrance exam?
Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo — without actually understanding a thing. While it’s not matriculating anytime soon, Todai Robot’s success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?
This Is How Google Wants to ‘Humanize’ Artificial Intelligence
In order to make AI more useful, Google launched PAIR (short for People plus AI Research), a project whose goal is to find more compelling uses of AI with a “focus on the ‘human side’”. The initiative also hopes to discover ways to “ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI.” PAIR will also create AI tools and guidelines for developers, making it easier to build AI-powered software that is easier to troubleshoot when something goes wrong.
Robotics
Bill Gates’ Plan to Tax Robots Could Become a Reality in San Francisco
The idea behind this tax seems noble: tax the robots that take jobs and use the revenue to fund basic income. On the other hand, a robot tax could hamper robotics research or make the goods produced with robots more expensive.
China’s blueprint to crush the US robotics industry
Not so long ago, Chinese robots were just copies of those from the US. Now, China is the biggest player in robotics, all thanks to a massive push by the Chinese government to be a world leader in a number of high-tech industries, such as medical devices, aerospace equipment and robotics.
The World’s First Drone Equipped with Robotic Arms
Ready to fly over and steal your stuff.
How Flytrex launched the ‘world’s first’ urban autonomous drone delivery system
I don’t know if they are the “world’s first”, but if you live in Reykjavik there is a chance that your next takeaway will be delivered by a drone.
These Dancing Robots Are Breaking Records
Dance, dance, robotic revolution!
Biotechnology
Has the Era of Gene Therapy Finally Arrived?
The FDA just approved the first gene therapy for sale, but such therapies remain far from fulfilling their early promise. We still don’t know much about what side effects or unintended mutations they can cause. On top of that, the price tag for gene therapy is high: Kymriah, the one therapy approved by the FDA, costs $475,000.
Lab-Grown Brain Balls Are Starting to Look More Lifelike
Sergiu Paşca, a neuroscientist at Stanford University, along with other researchers, is growing little balls of human brain tissue, about four millimetres in diameter, from stem cells in the lab. With prompting from the right chemicals, these cultures grow into neurons and other cell types that organize themselves over weeks and months into structures that resemble, at least to some degree, actual regions of the human brain. These mini-brains are then used in experiments that would previously have required a full-grown human brain.
Thanks for reading this far! If you got value out of this article, it would mean a lot to me if you would click the heart-icon just below.
Every week I prepare a new issue of H+ Weekly where I share with you the most interesting news, articles and links about robotics, artificial intelligence and futuristic technologies.
If you liked it and you’d like to receive every issue directly into your inbox, just sign in to the H+ Weekly newsletter.
Delivery Optimization and Different Clustering Algorithms
In its simplest form, a delivery system is one in which a supplier seeks to deliver a product or service to a consumer. However, over the course of history, given population growth and society’s shift away from the industrial sector, this system has become both fundamental and complex for society. Could you imagine a supermarket without a constant flow of supplies?
It is said that even the largest shopping centers hold merchandise for just one week. Now imagine you are the soft-drink supplier for the more than 5,000 grocery stores and restaurants in Mexico City. The social and economic implications for you and your clients are enormous, so optimizing your delivery system becomes an interesting problem.
The delivery system becomes especially complex when variables such as seasonality, order volatility, the arrival of new clients, the number of operators and the length of their shifts come into play. This article presents the first part of a solution to the optimization problem in the delivery system of a generic company, using data science and clustering algorithms.
Background
The problem begins with a company X whose business is delivering products to the final consumer. This company has more than 5,000 clients in Mexico City and must deliver to each of them over the course of the week. The company has a given number of delivery trucks and operators to carry out the task. The questions that torment the company are:
In a city with as much traffic as this one, each truck’s route is essential for completing deliveries on time. How do we build routes whose clients are as close together as possible? A client that is very far from the rest carries serious implications in time and money (higher fuel consumption, overtime pay for operators, etc.).
Given the variation in orders, how do we achieve a fair workload for the operators? One client might place an order of 10 times the average quantity, while another might require only the minimum amount of product. It is not just the number of clients that matters; their needs must be considered as well.
How do we optimize the number of workers per route? Too many workers means economic losses for the company, while too few results in a disproportionate workload.
Problem definition
Since we are facing an optimization problem, it is important to define the function to be optimized. In this case we want the delivery time to be as short as possible, provided the conditions stipulated in the problem definition are met. We will call these conditions the constraints of the optimization problem:
Operators must not carry an excessive workload, since that would reduce their satisfaction and, consequently, their productivity (a maximum of 8 working hours per day).
Every single client must receive their delivery.
The average speed of operators when delivering on foot is 5 km/h.
The maximum distance an operator may walk is 1 km per client, so as not to compromise the worker’s well-being or the usefulness of the stop.
Operators carry a small hand truck that lets them move a maximum of 8 boxes of product per trip from the truck’s stop to a client.
Data exploration
Every company is different, and every solution must be adapted to the particular conditions of each problem. Therefore, as in any problem involving data science, we start by exploring the records of the company’s past operations. For this section, more than 2 years of records were processed and analyzed in detail. Below is a brief summary of the findings.
According to the company’s history, stops can vary greatly in their distance to clients. Since the earlier goal was to make as few stops as possible with as many clients per stop as possible, we observed routes with fewer than 5 stops in which operators walked up to a kilometer and a half to serve a client, and routes with more than 10 stops in which each operator walked 200 meters on average.
Operators tend to walk 600 meters on average. This may be because the delivery trucks are large and hard to park in busy areas, so operators look for meeting points from which to deliver.
The standard deviation of boxes carried per operator on an average day is enormous — some operators carry up to 3 times the load of their colleagues.
25% of operators work more than 11 hours a day, while some operators work 6 hours or less.
The variables with the greatest impact on delivery time are the number of stops and the number of operators per vehicle.
Clearly, the delivery system needs an adjustment.
Solution to the problem
The solution consists of two stages. The first is building routes for each vehicle so as to obtain the minimum number of stops with the shortest distances to the clients. The second stage is optimizing the number of operators per route in order to reduce operating costs and meet the desired delivery times. The first stage of the proposed solution is described below.
In the first stage, the problem boils down to finding a good grouping of the clients and a strategic point for each group of clients — a clustering problem.
The literature offers many clustering algorithms capable of relating points to one another, each with advantages and disadvantages, so a good understanding of each algorithm is vital. The following image shows 9 different clustering algorithms from the scikit-learn library (Python) applied to 6 different datasets:
Figure 1. Comparison of clustering algorithms (http://scikit-learn.org)
In Figure 1 we can observe the behavior of the different algorithms on different point sets, where each color indicates a distinct cluster and the number in the lower-right corner of each example is the time each algorithm took to generate the clusters. The name at the top of each column is the name of the algorithm.
Some of these algorithms separate the points into a predetermined number of clusters; that is, they produce n clusters where n is supplied by the user. As explained in the problem definition, we want to find the optimal number of stops, so 6 of those methods were discarded: the number of clusters is precisely what we want to optimize.
This leaves us with three candidate algorithms:
DBSCAN.
Affinity Propagation.
Mean Shift.
DBSCAN
DBSCAN is an algorithm used mainly for anomaly detection. Its name stands for “density-based spatial clustering of applications with noise” and, as the name says, it is a density-based algorithm — density here being a number of points inside an n-dimensional sphere of radius ε. A cluster is a maximal set of density-connected points in a given space.
In this problem we assume all the data points are genuine, so classifying any point as noise would mean dropping one of the clients. Another point against DBSCAN is that it builds clusters from average densities, so as long as consecutive points stay closer than the density radius, DBSCAN will keep chaining them together. This is best appreciated in the following figure:
Figure 2. DBSCAN
What does this mean for our problem? DBSCAN could, in theory, generate stops serving an unbounded number of clients, forming chains that grow arbitrarily long whenever clients do not exceed an average distance from one another. Below is the result for a randomly selected route, where the dots represent the latitude and longitude of a set of clients and the X marks represent the stops suggested by DBSCAN:
Figure 3. Results obtained with DBSCAN
In Figure 3, each color represents the group of clients for the stop marked with an ‘x’. The light pink points are spread out non-uniformly, meaning they were classified as anomalies because they do not belong to any cluster. The red set, on the other hand, has a non-homogeneous distribution, so if we take the ‘x’ surrounded by those points as the ideal stop, the walking distances for the operators would be excessive, as would the number of clients per stop.
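The two DBSCAN drawbacks just described — genuine clients being thrown away as “noise”, and clusters chaining together — are easy to reproduce in scikit-learn. The coordinates, `eps` and `min_samples` below are illustrative assumptions, not the values used in the original project:

```python
# A minimal sketch of the DBSCAN behavior described above.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense groups of "clients" plus two isolated clients far away.
group_a = rng.normal(loc=(0.0, 0.0), scale=0.05, size=(20, 2))
group_b = rng.normal(loc=(1.0, 1.0), scale=0.05, size=(20, 2))
isolated = np.array([[5.0, 5.0], [-4.0, 6.0]])
X = np.vstack([group_a, group_b, isolated])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

# DBSCAN marks isolated points with the label -1 ("noise"): here, two real
# clients would simply be dropped, which is unacceptable for deliveries.
print(sorted(set(labels)))   # [-1, 0, 1]
print(int((labels == -1).sum()))  # 2
```

For the delivery problem this means DBSCAN offers no guarantee that every client is assigned to a stop, which violates the “deliver to every client” constraint directly.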
Affinity Propagation
The affinity propagation algorithm groups the points by optimizing a similarity function s, such that a given point xi belongs to the same cluster as xj, and not to that of xk, if and only if s(xi, xj) > s(xi, xk):
Figure 4. Affinity propagation.
The algorithm identifies an exemplar and iteratively associates nearby points with it until a given similarity metric is maximized.
The disadvantage of this algorithm for the problem at hand is that it considers similarity rather than distance between points, which can produce distances that compromise the operators’ workload and, in the long run, their productivity — if two points happen to have high “affinity”, the algorithm may group them into the same stop even if they are 5 km apart.
Below is the result of applying affinity propagation to the selected example:
Figure 5. Results obtained with affinity propagation.
At first glance it is a workable configuration; however, at the bottom there is a red cluster in which the points at the extremes are almost two and a half kilometers apart. To serve the clients at those extremes from a stop placed at the middle of the cluster, an operator would have to walk more than two kilometers, considering the round trip.
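The failure mode above can be sketched in scikit-learn as well. Affinity Propagation picks exemplars from pairwise similarities (negative squared distances by default) and exposes no parameter that caps how far a member may lie from its exemplar. The client coordinates below are made up for illustration:

```python
# Hedged sketch: Affinity Propagation has no distance constraint.
import numpy as np
from sklearn.cluster import AffinityPropagation

# Six client locations (units: meters); hypothetical coordinates.
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]], dtype=float) * 1000.0

model = AffinityPropagation(random_state=5).fit(X)
stops = model.cluster_centers_  # exemplars are actual client locations

# One-way walking distance from each client to its assigned stop;
# nothing in the model bounds this value.
walks = [np.linalg.norm(x - stops[l]) for x, l in zip(X, model.labels_)]
print(len(stops), f"stops; max one-way walk: {max(walks):.0f} m")
```

With this toy layout the farthest client ends up about 2 km from its exemplar, well over the 1 km walking constraint, even though the clustering itself is perfectly reasonable from a similarity standpoint.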
Mean Shift
Mean Shift is an algorithm that, for each point, builds an association with respect to a probability density function. In high-density regions of a given space, clusters form within a region called the window or bandwidth: the mean of the data inside the window is computed, the center of the window is shifted to that mean, and the process is repeated.
Since Mean Shift does not require the number of clusters as a user-defined parameter, it relies on a kernel density estimate, a technique known as the “Parzen window technique”. Given a kernel K and a window or bandwidth h, the kernel density estimate for a set of n points x_i in d dimensions is given by: f(x) = 1/(n·h^d) · Σ_{i=1..n} K((x − x_i)/h)
Consequently, the bandwidth, or window, has a physical meaning in the formation of the clusters, which translates into a limit on the distance the operators have to walk.
This algorithm resolves the concern raised at the beginning and does not abuse the workload of any operator.
Applied to the generic example, we obtain:
Figure 7. Results obtained with Mean Shift
Analyzing the results shown in Figure 7, the maximum walking distance for an operator is 960 m round trip, which satisfies the established constraints.
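A minimal Mean Shift sketch of this behavior, again with scikit-learn. The coordinates and bandwidth value are illustrative assumptions; the point is that the bandwidth plays the role of the Parzen window, so clients farther apart than that scale cannot be pulled into the same stop, and the number of stops is not fixed in advance:

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
# Two neighborhoods of clients about 3 km apart (units: meters).
clients = np.vstack([
    rng.normal((0.0, 0.0), 150.0, size=(25, 2)),
    rng.normal((3000.0, 0.0), 150.0, size=(25, 2)),
])

# The bandwidth loosely encodes the walking limit per operator.
model = MeanShift(bandwidth=800.0).fit(clients)
stops = model.cluster_centers_  # one suggested stop (mode) per cluster

print(len(stops))  # 2 — found without specifying the number of clusters
walks = [np.linalg.norm(c - stops[l]) for c, l in zip(clients, model.labels_)]
print(f"max one-way walk: {max(walks):.0f} m")
```

Because every client ends up within the bandwidth scale of its stop, the walking-distance constraint is respected by construction, which is exactly the property the article exploits.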
Results
For the delivery-system optimization problem — in particular the stage of grouping clients and choosing stops — the Mean Shift clustering algorithm gave the best results under the constraints established in the problem statement. Using Mean Shift we were able to find groups of clients that can be served from a stop located at the centroid of each group, satisfying the requirements on delivery times and on the distance each operator must walk.
It is worth emphasizing that this solves only one of the problems stated at the beginning of this article and represents just a small part of all the work carried out for this project. The remaining work will be presented in future articles.
Clustering algorithms: pros and cons
This section presents a brief summary of the algorithms reviewed in this article:
DBSCAN
Pros:
Effective for anomaly detection.
Simple to implement.
Non-parametric.
Cons:
Does not scale to applications with distance constraints.
Does not work on data with wide differences in density.
Affinity Propagation
Pros:
Good performance at identifying qualitative relationships and groupings.
Effective in imaging applications.
Non-parametric.
Cons:
Does not scale to applications with distance constraints.
The similarity metric may not be appropriate.
Mean Shift
Pros:
Allows distances within clusters to be parameterized.
Does not assume any distribution of the data.
Non-parametric with respect to the number of clusters.
Cons:
Choosing an optimal window parameter is difficult.
An inappropriate window choice can result in poor clusters.
References and related projects
https://saravananthirumuruganathan.wordpress.com
https://towardsdatascience.com/a-brief-overview-of-outlier-detection-techniques-1e0b2c19e561
http://scikit-learn.org/
Last week was… tough.
After working solo for my hackathon project, I had forgotten how hard it is to work in a group. There’s merge conflicts, people getting mad about merge conflicts (“why did you delete all my stuff?” “it wasn’t me, it was git!”), having to read and understand other people’s code, and not always getting to work on the coolest parts of the app.
My capstone project at the Grace Hopper Program of Fullstack Academy involves making a lighter, more intuitive version of Habitica. As we only had a little over two weeks to work on the project, we had to set realistic goals for what we could accomplish. In the end, our plan for the project was nice, but not impressive.
To beef things up a bit, one of our group members suggested implementing machine learning in our app. We could have the users take a quiz upon signing up, run it through IBM Watson, and get information about their personalities. We could then use those personality profiles to suggest to users what habits they should take up.
Machine learning sounded fun, and it would certainly make our app more interesting. But we needed to build the infrastructure before we could add that layer on top. It was decided that one person would work on IBM Watson while the others built the infrastructure of the app, and I was not the lucky person.
There are moments where I’m sitting in front of my computer where I feel like my code is my friend. I created it. It understands me. We have a great relationship. But this relationship was *sob* shattered when I was given those 5 words that I’m sure many developers dread: “your work isn’t interesting enough.”
What.
I know I’m not doing machine learning... But.
What.
To some extent I understand where this was coming from. Much of the code I wrote didn’t necessarily show up on the frontend. I once spent an entire day implementing a functionality that would allow users to uncheck a habit (in case they had mistakenly checked it), with nothing to show for it except a visual change in the progress bar (a progress bar, which, of course had bugs in it and didn’t render properly). To an outsider, it can seem that nothing happened during that day. But, friend, I’m telling you, I put a lot of work into making sure you were covered in case you checked off a habit you weren’t supposed to! (See how lame that sounds?)
It can be hard working on a project without seeing how your work affects the big picture, and even harder when others can’t see it too. I find solace in my code, though, and solace in knowing that my work has contributed to making what I hope will be an amazing final project. It’s time to focus on the bigger picture.
Upcoming Events — Meet Team Verv!
Team Verv, London
SINGAPORE
June 7: https://www.meetup.com/Ethereum-Singapore/events/251095652/
Event: Verv hosted event, Ethereum Singapore
Speaking: Head of Equity Finance / Token Allocation, Alexander Nicholson
LONDON
June 7: https://www.energylivenews.com/tc-events/energy-live-future/e
Event: Energy Future Live
Speaking: Business Development Manager, Bill Goldie
LONDON
June 7: https://www.hardwarepioneers.com/2018-events/iot-and-connected-hardware-showcase-2018
Event: IoT and Connected Hardware Showcase 2018
Exhibiting: Team Verv
LONDON
June 11–12: https://cogx.co/
Event: The Festival of All Things AI, Blockchain and Emerging Technologies
Attending: COO, Maria McKavanagh
Exhibiting: Team Verv
NEW YORK
June 11–12: https://www.bcisummit.com/
Event: Bitcoin & Cryptocurrency investment summit
Speaking: CEO, Peter Davies
LONDON
June 13: https://mjac.io/
Event: CryptoCompare Mjac Blockchain Summit
Speaking: COO, Maria McKavanagh
Exhibiting: Team Verv
BANGKOK
June 18: http://www.swpark.or.th/index.php?option=com_seminar&task=3&cid=323&Itemid=129
Event: Technology Investment Conference 2018 “Investments that Spark Science and Technology Innovation”
Speaking: Head of Product Blockchain, Yi Jean Chow
VIENNA
JUNE 18–19: https://www.engerati.com/meets/transactive-energy-blockchain/session/driving-p2p-energy-trading-using-ai-technology
Event: Engerati
Type: Presentation on the future of transactive energy
Time & date of Maria’s presentation: Monday 18th June, 16:45
Speaking: COO Maria McKavanagh
VIENNA
JUNE 19–21: http://www.electrify-europe.com/en_GB/index.html
Event: Electrify Europe
Type: Panel discussion on the future of the energy industry
Speaking: COO Maria McKavanagh
Time & date of panel talk: Tuesday 19th June, 16:00–17:30
SILICON VALLEY
JUNE 24–29: http://www.freetheelectron.com/
Event: Free Electrons Global Energy Programme — accelerator
Attending: CEO, Peter Davies and Head of Product Blockchain, Yi Jean Chow
Kepler Technology - The Future Of AI Robotics
We live in a world where everyone depends on technology; in other words, we are addicted to it. From children to elders, everyone uses technology to accomplish their work. Technology makes our lives meaningful. Blockchain technology is a major technological innovation in the history of mankind, and robotics is the next innovative branch that will change the way we live. Robots are increasingly becoming integral parts of our everyday lives and are on their way to intervening in the overall process of human innovation. From Artificial Intelligence assistants to self-driving cars and even humanoids (Sophia, the first robot to be awarded citizenship, in Saudi Arabia), advances in technology open the horizon for an electrifying future.
What is Kepler Technologies?
KEPLER Technologies wants to shape the future of humanity together with us, with our direct participation and under our management. They created Keplertek, the first AI and robotics ecosystem powered by the blockchain. Blockchain is a solution with the potential to transform the industry, and it is the foundation of Keplertek. Through blockchain-powered smart contracts and the use of cryptocurrency, the platform provides more transparency.
Features of Keplertek:
* Lowering the cost — Kepler eliminates the need for middlemen such as government agencies and financial institutions when making an investment. Kepler uses blockchain technology to credit the investment directly to the startups at a fraction of the cost.
* Efficiency — With the help of distributed ledger technology, investors can track how their cryptocurrency is being used by startups, and this information cannot be erased, which improves the efficiency of investment use.
* Transparency — If an investment-funded startup fails to meet the predetermined conditions, smart contracts allow investors to receive their investment back or to redirect it towards more deserving startups.
Kepler applies distributed ledger technology to the robotics and AI development ecosystem. Investors will have more trust in KEPLER platform projects and, consequently, will be more willing to invest. Kepler uses an ERC20 token (KEP) that aims to become the preferred method for investing transparently through the Ethereum blockchain and for accessing the KEPLER platform.
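The refund mechanism described above can be pictured as a tiny state machine: funds stay locked until a milestone decision, then are either released to the startup or returned to investors. The sketch below is a hypothetical Python simulation of that idea — the class, names and rules are illustrative, not Kepler’s actual smart-contract code:

```python
# Hypothetical escrow sketch (illustration only, not a real contract).
class EscrowContract:
    def __init__(self):
        self.balances = {}   # investor -> amount locked in the contract
        self.released = 0.0  # total paid out to the startup

    def invest(self, investor, amount):
        self.balances[investor] = self.balances.get(investor, 0.0) + amount

    def settle(self, milestone_met):
        """Release funds on success; refund every investor on failure."""
        if milestone_met:
            self.released = sum(self.balances.values())
            refunds = {}
        else:
            refunds = dict(self.balances)
        self.balances = {}
        return refunds

contract = EscrowContract()
contract.invest("alice", 100.0)
contract.invest("bob", 50.0)
print(contract.settle(milestone_met=False))  # {'alice': 100.0, 'bob': 50.0}
```

On a real blockchain this logic would live in a smart contract so that neither the startup nor any intermediary can override the outcome.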
Kepler Token Information :
Symbol : ( KEP )
Maximum Supply : 100,000,000
KEP Type : ERC20
Price : 1 KEP = 1.25 USD
Kepler’s Vision :
Kepler Technology is backed by blockchain technology and smart contracts. No human agents are involved in any transaction, thereby guaranteeing 100% fraud protection and a 100% reversal of investments from startups that do not meet the predetermined conditions encoded in smart contracts. Kepler provides a platform where inventors and visionaries from all over the globe can showcase their inventions, and investors can diversify their funds by investing in patents and products through the KEP token.
Kepler Technology is a vision-oriented, futuristic project. Robotics and Artificial Intelligence are the future, and Kepler provides a strong platform for AI startups to make their ideas come true; it is a haven for visionaries. With the use of blockchain technology, Kepler eliminates corruption and fraud. The world is moving towards a cashless society, and Kepler helps us get there. Robotics is a branch that is yet to change the world: AI robots can do work far better than we ever imagined, and Kepler is providing a path towards that future. Invest in the Kepler project and be part of the revolution.
Know more about Kepler here : https://keplertek.org
Join Telegram : https://t.me/KeplerTechnologiesJ
Join Twitter : https://twitter.com/KeplerTek
Facebook : https://www.facebook.com/Keplertek/
Bitcointalk : https://bitcointalk.org/index.php?topic=2853182.0
About Me : Abhijeetcg
Bitcointalk link :
https://bitcointalk.org/index.php?action=profile;u=1676105;sa=summary
|
0 | null | 0 | fc78dab2b103 | 2018-03-06 | 2018-03-06 15:48:14 | 2018-03-06 | 2018-03-06 15:54:34 | 1 | false | en | 2018-07-06 | 2018-07-06 07:17:05 | 1 | 102c227aff8d | 0.603774 | 0 | 0 | 0 | Title: Performance of the 50 Greatest NBA Players | 3 | Homework #2-Josh Thoo
Title: Performance of the 50 Greatest NBA Players
Conclusions:
The more points an NBA player scores, the more All Stars he will attain.
Between 15,000 and 22,000 points, there is no clear correlation between total points and All-Star selections; the relationship fluctuates.
Generally, NBA players may expect to have 1 All Star for every 2500 points they score.
library(rvest)    # needed for read_html() and html_table(); this import was missing
library(ggplot2)
library(stringr)
page <- read_html("https://en.wikipedia.org/wiki/50_Greatest_Players_in_NBA_History")
page_tables <- html_table(page, fill = TRUE)
players <- page_tables[[4]]
players <- players[order(players$Pts), ]
# The original code referenced an undefined `players2`; use `players` throughout
# and strip non-alphanumeric characters so the points column can be numeric.
players$Pts <- str_replace_all(players$Pts, "[^[:alnum:]]", "")
players$Pts <- as.numeric(players$Pts)
qplot(x = players$Pts, y = players$`All Star`,
      main = "50 Greatest NBA Players",
      xlab = "Points", ylab = "All Stars won")
|
0 | null | 0 | null | 2018-04-26 | 2018-04-26 03:22:49 | 2018-04-26 | 2018-04-26 05:40:14 | 1 | false | en | 2018-09-19 | 2018-09-19 03:35:32 | 2 | 102d19a8eb7b | 0.690566 | 2 | 0 | 0 | Update: | 5 | Tic Tac Toe with MiniMax AI
Update:
I created the android app for this.
Tac Adventure — Apps on Google Play
Play with bot.Play with friend.play.google.com
Yesterday I was bored as hell. With nothing in particular to do, this semester break seemed pretty dull. I was lying in bed, bored and lazy, and out of nowhere I came up with the idea of making a game. I decided on Tic Tac Toe and started laying out my game plan. Within a few hours I had a playable game in front of me.
History repeats itself. Here I am, bored again with no idea what to do. So I decided to share the game I made yesterday along with the code. :)
Tech used: HTML5 Canvas, JavaScript.
complete AI code: https://github.com/Bipinoli/Tic-Tac-Toe
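The core of a MiniMax tic-tac-toe bot fits in a few lines. The sketch below is a generic Python version of the algorithm for illustration (the actual game linked above is written in JavaScript):

```python
# Generic MiniMax for tic-tac-toe. The board is a list of 9 cells,
# each holding "X", "O", or None. "X" maximizes, "O" minimizes.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if a line is completed, else None."""
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from X's perspective: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        board[m] = player                     # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                       # ...and undo it
        results.append((score, m))
    return max(results) if player == "X" else min(results)

def best_move(board, player="X"):
    return minimax(board, player)[1]
```

With perfect play on both sides tic-tac-toe is a draw, so a full search from the empty board returns a root score of 0.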
| Tic Tac Toe with MiniMax AI | 70 | tic-tac-toe-with-minimax-ai-102d19a8eb7b | 2018-09-19 | 2018-09-19 03:35:32 | https://medium.com/s/story/tic-tac-toe-with-minimax-ai-102d19a8eb7b | false | 130 | null | null | null | null | null | null | null | null | null | Programming | programming | Programming | 80,554 | Bipin Oli | null | 18b69cc1f7cf | bipinoli90 | 7 | 43 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 3b4c6021990f | 2018-03-18 | 2018-03-18 07:27:22 | 2018-04-08 | 2018-04-08 00:28:58 | 1 | false | en | 2018-04-08 | 2018-04-08 00:28:58 | 8 | 102eb9230f07 | 6.649057 | 1 | 0 | 0 | For simplicity, I’m going to outline some of the the assumptions & opinions underlying my discussions. While speculative & controversial… | 4 |
Assumptions & Common Objections
For simplicity, I’m going to outline some of the assumptions & opinions underlying my discussions. While speculative & controversial, these opinions have many credible supporters, and there has been much written on these topics.
One thorough option is Superintelligence, by Nick Bostrom.
Eliezer S. Yudkowsky is excellent. Read all his stuff (my favorite ideas in Superintelligence are from him, also read his Quantum Physics Sequence if you’d like to change your view on reality).
I’ll leave you the reader to argue elsewhere about these issues, and just take them as granted here (or feel free to move on).
Primary Assumptions:
We will develop a machine (often called an Artificial General Intelligence or AGI) with intelligence greater than that of a human being.
There are no barriers to creating a machine at least as intelligent as a human
We have been making progress at a swift rate, since the invention of computers.
A general AI equal to that of a human is very likely within the next 200 years, conservatively (assuming civilization survives that long).
There is no reason to believe that such machines cannot then become more intelligent than humans, in both speed, flexibility and general quality.
It is unlikely humanity can consciously restrain itself from developing such technology (when have we ever done this, when it was very valuable?)
An AGI is, at the least, a significant existential risk to humanity.
We might get the stereotypical Skynet
We might get the benign-sounding but disastrous Paper Clip Maximizer.
We might get a well-meaning intelligence, that nevertheless ends up wiping out humans as an externality, much as humans have destroyed countless species (and indigenous human societies) as we have grown.
We might get a benevolent AI, leading to a paradisiacal singularity. But, I do not believe any argument that this is a certainty or even likely. Therefore, the risk is still worth dealing with.
This is very similar to arguments that climate change might a) not be severe, or b) produce benefits. While either of these assumptions might be true… there is no evidence to suggest they are particularly certain, and thus we should still prepare for the big downside, even if only as insurance.
I also tend to find most arguments that AGIs would be beneficent to be lacking in quality, and backward looking. This reminds me of the turkey, who, being a good empiricist, is most convinced that the farmer wishes him well the day before Thanksgiving.
AI might destroy humanity more thoroughly than even global nuclear war. Given this risk, it’s well worth it for us to prepare how to reduce, or at least, understand this risk.
This is my primary interest in these posts
Secondary Assumptions:
Machines will outstrip biotechnology until we reach Human-level AI. Biotechnology is unlikely to allow human intelligence to keep pace with machines, or ‘merge’ with them.
This deserves its own discussion, but in short: my observations suggest very fast progress (say, in the last 40 years) in machine ‘intelligence’ and vastly slower progress in understanding, let alone improving, biological systems.
Some factors influencing this are:
We still have very limited tools for understanding biological systems. Developing new tools is hard, and not that profitable. For example, successful drug companies are more valuable than tool companies. Successful biological tools companies are rare. I experienced this working in Venture Capital for many years, desiring to fund such companies.
Biological systems, let alone the brain, appear exceedingly complex. We are still very far from understanding their dynamics.
The number of well-defined tasks that humans can perform much better than machines is steadily, quickly decreasing, whether it be Go, radiology, poker, driving, investing, chemical synthesis, or even burger-flipping. Even things like production of art, or mathematical proofs are in play.
The cost and time it takes to improve biology is massive compared to information technology. This is in part due to our limited tools, but also due to other factors such as:
Regulation and social dynamics. (E.g. genetically engineered salmon have taken over 25 years since their development to be approved for Canadian markets, and are still banned in the US).
The inherently slow nature of biological systems (e.g. growth + development time of humans is long! You can’t test anything quickly). Software is inherently much faster.
Machine / human integrations are likely not a panacea. These will be very hard to produce, and rate-limited by all the things that limit biology in general. Unless the biological component provides real added value, I don’t see why they will persist, and the machine side not become dominant.
Common Objections
Below are some commonly made objections to these points or arguments for why we shouldn’t be discussing this now. These all deserve more space than they get there, but this is just not my priority now.
Machines can’t be conscious. Only organic things like us can.
A: I am not concerned with philosophical definitions of consciousness here. The question at hand is an empirical one, as to whether machines can beat humans in all relevant empirical tasks.
Neurons are vastly more complex than their digital representations. We are nowhere close to making machines with the computational power of biological systems.
A: There is sufficient evidence to make such an argument. The more we learn about biology (neurons being just one example), the more it seems like an immensely complex alien nanotechnology that is far beyond our current understanding. That said, our machines (from the wheel up to Alpha Go Zero) seem to be able to handily beat biology in well-defined and relevant tests. So complexity is clearly not a universal defense. The number of tasks at which machines beat humans is growing steadily. I see no evidence that any of the remaining tasks will be insurmountable due to yet to be understood elements of biological complexity, as they do not seem to utilize fundamentally different systems or processes.
There are more important things (related to this) to worry about! What about climate change, nuclear weapons, poverty, or inequality caused by automation obsoleting jobs now?
There are many things to worry about in the world! Oh man it gets me down thinking about them. But this is one I think we should worry about more than we currently are. This is because a) we are not worrying about it much, currently (though Elon & Co. are helping!) b) I believe this is the highest probability cause for the complete extinction of humans (or at least a tie with nuclear weapons).
Global Warming: Yes this is a big deal. But it is highly unlikely to result in the complete extinction of homo sapiens. Also, plenty of other folks are carrying the torch here.
Nuclear Winter: Good point. This is a huge thing to worry about, which we don’t currently worry enough about. I’m not sure how much it would take to truly drive homo sapiens to extinction, but nuclear winter doing so seems at least plausible. But I have little to add on this topic. (Except to throw in quickly that appeals for world leaders to grow sane are laughably naive, and that the best chance is a defense that beats ICBMs).
Concentration of power / machines taking human jobs:
Yes these are all big (and current) issues to deal with. In some ways these are simply the current steps in the process toward AGI birth. They certainly deserve more worry than AGI, currently. But that does not mean we should ignore AGI risks.
That said, these risks are unlikely to totally extinguish humanity. We could recover from them. And there are many, many other people fighting them.
We can both deal with these challenges and prepare for the more distant future.
Human society is already like a huge (decentralized) intelligence that rules our lives and creates huge amounts of misery. AI will just be a gradual extension of this process:
This is a subtle and fascinating perspective. It is correct, perhaps, but this is more due to the perspective change than to any really new information. AGI may well result in the continuation of this process. But before true AGI arrives, this decentralized “intelligence” is unlikely to exterminate humanity, so I don’t think this frame is really helpful. Let’s just keep this in the frame of the previous question: it’s a political worry.
This is such a complex and uncharted domain. We don’t have the ability to foresee what will happen. Just look at how bad we are at predicting how technology will progress over 5 years, or how an election will turn out! It’s a waste of time.
Maybe. But the cost of trying is not huge. The opportunity cost of not trying could be massive. Let’s try.
AI / machine learning is totally over-hyped! We are so far from AGI it’s not worth talking about:
I hear you. There’s a huge hype cycle here, and it is amazingly annoying. And we are probably far from AGI (of course ‘far’ is a completely relative term). But, we are within 200 years of it at most. And maybe it’s much closer. The risk of not addressing this in time is existential. Read Yudkowsky here for why the transition to AGI might be hard to see coming.
There really can’t be an intelligence much better than ours. We are already close to the global optimum.
The simplest argument against this, is that even if an AGI were simply equal to human intelligence, it would operate much faster, at lower cost, and more flexibly, which would render it massively superior. It seems very unlikely this sort of AGI is not achievable. Nick Bostrom and others cover this well.
Machines will always do what we tell them to. So no worries.
Do your current machines behave in predictable ways? The first lesson of software engineering is that even simple programs can be incredibly unpredictable. No one is smart enough to keep an AGI genie in the box.
We can work hard to create AGI that is under control, or at least safe.
This is equivalent to a genie who will grant your wishes safely. This genie is very hard to define.
Got more arguments?
I’m not surprised.
| Assumptions & Common Objections | 43 | assumptions-common-objections-102eb9230f07 | 2018-04-09 | 2018-04-09 14:36:49 | https://medium.com/s/story/assumptions-common-objections-102eb9230f07 | false | 1,709 | Speculation on the future of artificial intelligence | null | null | null | Somebody has to! | null | somebody-has-to | AI,MACHINE LEARNING,INTELLIGENCE,PHILOSOPHY | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Benjamin Stingle | Two souls look out through bars: One sees mud, the other, stars. | 9650a6270d4f | marojejian | 243 | 264 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-14 | 2018-08-14 13:24:29 | 2018-08-24 | 2018-08-24 17:28:42 | 4 | false | en | 2018-08-28 | 2018-08-28 17:45:11 | 7 | 10302ca9d62d | 6.349057 | 11 | 0 | 0 | The Snipfeed team is happy to introduce our first public tool to debunk fake news online: Fakebuster (chrome extension here). | 4 | Fakebuster: Fighting fake news with artificial intelligence
The Snipfeed team is happy to introduce our first public tool to debunk fake news online: Fakebuster (chrome extension here).
Fakebuster helps you check how trustworthy an article is based on the writing style, the facts mentioned, as well as the headline. There is no magic here, and this should only be considered a mere indicator. Trust and “fake news” are two complicated and ambiguous terms; there is no simple answer to these issues. However, trained on hundreds of thousands of articles and checking in real time where and how a piece of news spreads, the algorithm gives a pretty good score most of the time.
As Mark Zuckerberg told legislators, artificial intelligence cannot stop fake news, and it could be five years or more until it might be able to do so. However, we believe we should give the public state-of-the-art tools to help anyone figure out if they are in front of doubtful information. This is a crowdsourced effort, which is why we ask everyone to report a mistake if the AI gives a wrong result on an article: it will expand our dataset and improve the algorithm. Keep in mind there are no comprehensive, quality datasets on fake news yet. Machine learning is one option among many others to solve this issue; blockchain could actually do much more.
Source: Late Night with Seth Meyers
Why should we care?
Until recently, we rarely heard the term fake news. People were skeptical but wouldn’t describe a piece of news as “fake”; it was rather “biased”. The term has been popularized by President Trump’s rhetoric and describes, in his words, “a news that a big media outlet has changed in order to create a message”.
However, real fake news (stories completely made up) is now all over the place, including videos, and it spreads at a faster pace than real stories. Many say it has been used to manipulate the 2016 elections, as Facebook’s own VP mentions below.
“The main goal of the Russian propaganda and misinformation effort is to divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well. We are quite divided as a nation”. Rob Goldman VP Ads @facebook
Let’s face it: from Facebook to YouTube, fake news is an existential crisis for social media. As centralized platforms, they don’t have visibility into how and where content is distributed, especially in messaging apps such as WhatsApp, where people consume 25% of their news. Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly back in 2016. But too often it has been by shifting the burden of identification to unpaid third parties like fact checkers. I can see the light bulb over your head… blockchain could do that, no? We will come back to it later.
Facebook has built some anti-fake news tools to try to tweak what its algorithms favor but this is a hard mission regarding the huge amount of content created every single day on the platform. According to an MIT study on fake news in Twitter, users are 70% more likely to retweet falsehoods than true facts. Simply put, fake news spreads quicker and reaches a wider audience than the truth… And there is no incentive to change current behaviors.
Study: On Twitter, false news travels faster than true stories
Source: MIT Sloan School of Management
But what is really “fake” and why is it such a boring debate for most people?
Let’s go back to the roots of the problem: fake news is as old as humanity, because every person has the freedom to write, whether truthfully or not. It once took the form of rumors, or of propaganda that altered facts in order to make the enemy lose. There was news and “not news” — as denoted by comments of “that’s not news”. But today an individual can spread a piece of fake news and people may even believe it. This can spoil someone’s reputation in seconds. Social media changed the game forever: what was once described as yellow journalism is now content king, because attractive headlines generate more likes and votes. So people started creating fake news, as it brings more traffic than real news. The problem today is that “fake news” means everything and nothing at the same time; it creates confusion about what is an opinion, what is a fact, and what is propaganda. It’s a kind of inception: we start reading an article, and as soon as we disagree we tend to put in less and less effort, our brain categorizing it as fake news. As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”
Let’s have a look at Urban Dictionary’s definition to clarify (clearly a reference for Gen Z and millennials, isn’t it?). “Fake news” is described as: “A piece of news which has been distributed by a news organisation which contains some form of dishonesty, typically to promote a political agenda. Fake news can be broken down into a number of categories, which are: outright lie, lying by omission, lying by structure (the deliberate positioning of critical information at the end of a news report, for instance) or selective outrage and emotive appeal”. This is why we have decided to include in our model not one but three different components to compute how trustworthy an article might be. Those components are:
The facts: We check the reliability of some of the key facts in the article by comparing in real time what other news sources say about them.
The style: Thanks to a model trained on hundreds of thousands of articles, this score tries to detect a “fake news” style of writing (aggressive or ambiguous argumentation, for instance).
The headline: We know headlines are supposed to tease the reader, but we also believe they should not mislead him. This score assesses how well the headline is related to the content.
The final score is rarely higher than 60%, because the AI is still young and can’t really understand the concept of “opinion”, which is what makes the value of journalists’ work. Our goal is not at all to say how good an article is, but to give an indication that something might be wrong and that you should double-check it when the score is bad.
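As a toy illustration, three such sub-scores could be folded into one number with a weighted average; the weights below are assumptions made up for the example, not Fakebuster’s actual, learned model:

```python
def trust_score(facts, style, headline, weights=(0.5, 0.3, 0.2)):
    """Combine three sub-scores in [0, 1] into a 0-100 trust score.

    The fixed weights are purely illustrative; a real system would
    learn how to combine its components from labeled data.
    """
    w_facts, w_style, w_headline = weights
    combined = w_facts * facts + w_style * style + w_headline * headline
    return round(100 * combined)

# A mediocre article: plausible facts, middling style, loose headline.
print(trust_score(facts=0.6, style=0.5, headline=0.4))  # 53
```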
Is there a solution?
Well, there is no easy answer… Many things need to change if we want to create a sustainable system for both media and users. AI can certainly help by pre-screening articles and, with recommendation algorithms, making sure we read stories we have an interest in. Those same algorithms can also offer readers access to context in a click, in a snackable format that fits our expectations (like it or not, nobody likes to scroll through very long Wikipedia articles). This is what we do with our messenger prototype, which you can try here.
On the other hand, blockchain will help a lot in the near future: to spot fake news, reward curation and make monetization easier for publishers. Trolls would definitely be scarcer with a blockchain network tracking identities, and it would be even better at tracking and comparing the actions of various accounts. If one bad actor is in charge of a variety of accounts, blockchain can spot and stop this activity faster than machine learning can. The most exciting thing, however, is how we could incentivize users to provide quality content. This is what Steem did when they launched their blockchain-based social media network that rewarded content creators with financial incentives. This platform works just like Reddit and gives tokens based on the number of up and down votes posts receive. Think of it as a “live” prisoner’s dilemma: since posters are financially incentivized to minimize down votes and are rewarded for quality content, the social media network is able to sort out bad actors without any external intervention.
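A minimal sketch of vote-driven rewards, assuming a fixed payout pool and a saturating curve (this is not Steem’s actual algorithm, which is stake-weighted and considerably more elaborate):

```python
def post_reward(upvotes, downvotes, pool=100.0):
    """Toy payout rule: net-positive votes earn a share of a fixed pool.

    The saturating curve caps how much any one post can extract,
    while posters remain incentivized to avoid down votes.
    """
    net = max(upvotes - downvotes, 0)
    return pool * net / (net + 100)

print(post_reward(100, 0))    # 50.0: halfway up the curve
print(post_reward(100, 120))  # 0.0: heavily down-voted posts earn nothing
```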
This effort goes hand in hand with a reflection on how content should be distributed online, especially on mobile. The “news” format needs to change dramatically: we still offer users a content experience similar to what they would find in a newspaper, while it needs to be entirely rethought. Publishers need to pursue their own avenues of distribution and monetization. Their need to acquire, retain and monetize is one of the most difficult challenges to master profitably and at scale, which is why most companies relied on social media. But the latter was never meant for it, as the changes of algorithms have shown.
Snipfeed tries to tackle those issues by building a game-changing platform. Stay tuned.
If you want to learn more and have access to the scientific paper, you can find more information in this article.
| Fakebuster: Fighting fake news with artificial intelligence | 179 | fakebuster-fighting-fake-news-with-artificial-intelligence-10302ca9d62d | 2018-08-28 | 2018-08-28 17:45:11 | https://medium.com/s/story/fakebuster-fighting-fake-news-with-artificial-intelligence-10302ca9d62d | false | 1,497 | null | null | null | null | null | null | null | null | null | Journalism | journalism | Journalism | 39,588 | Rédouane Ramdani | Bay Area, Entrepreneur, CEO & Co-Founder of Snipfeed Inc. | b4514c97e0d | rdouanermdn | 91 | 97 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 74d3d7d95404 | 2018-09-28 | 2018-09-28 17:54:07 | 2018-09-28 | 2018-09-28 13:55:30 | 1 | false | en | 2018-10-03 | 2018-10-03 19:01:03 | 14 | 1031c873db7b | 3.173585 | 0 | 0 | 0 | A cyborg artist; extreme biohacking; growing mini-brains in the lab; can Boston Dynamics sell their robots; and more! | 5 |
This week — a cyborg artist; extreme biohacking; growing mini-brains in the lab; can Boston Dynamics sell their robots; why AIs have a problem with the real world and why do they learn to smell; and more!
H+ Weekly is a free, weekly newsletter with latest news and articles about robotics, AI and transhumanism. Subscribe now!
More than a human
Moon Ribas — Cyborg Artist
Meet Moon Ribas — an artist, a dancer and a cyborg. Her body is connected to seismographs through implants which vibrate every time there is an earthquake somewhere around the world. In this video (16 minutes long) she tells her story of becoming a cyborg, what life with an extra sense looks like and what message she wants to convey as a cyborg artist.
Extreme biohacking: the tech guru who spent $250,000 trying to live for ever
Serge Faguet takes biohacking to the next level. The Guardian details his daily routine and what he’s doing to his body. Plus explains what biohacking is and how does the biohackers community look like.
How Close Are We to Downloading the Human Brain?
Downloading your brain may seem like science fiction, but some neuroscientists think it’s not only possible but that we’ve already started down a path to one day make it a reality. So, how close are we to downloading a human brain?
Artificial Intelligence
Why Self-Taught Artificial Intelligence Has Trouble With the Real World
The short answer is that the real world is much messier and noisier than a game. Games represent a cleaner reality with clear goals, which makes them easier to master by self-learning algorithms. But some lessons learnt in the games can be applied to AIs and robots that need to engage with the real world.
New AI Strategy Mimics How Brains Learn to Smell
Great progress has been made in artificial intelligence research by trying to reproduce how our brains process visual data. This approach has its limits, and to beat them, scientists are drawing inspiration from the sense of smell.
Instilling the Best of Human Values in AI
Now that the era of artificial intelligence is unquestionably upon us, it behoves us to think and work harder to ensure that the AIs we create embody positive human values.
Robotics
These Robots Run, Dance and Flip. But Are They a Business?
Boston Dynamics is probably the most recognisable robotics company in the world. Videos of their robots running in the forest or doing backflips have amassed millions of views on YouTube. As the company prepares to start selling Spot Minis (their small robot-dogs), some people have started to wonder who will buy those robots and, if they do, what they can do with them. This leads to another question — will Boston Dynamics be capable of selling robots?
This Robotic Skin Makes Inanimate Objects Move
Researchers from Yale’s Soft Robotics lab, the Faboratory, have created a soft robotic skin full of actuators and sensors which you can apply to any inanimate soft object to make it move.
The Hunt for Robot Unicorns
The robotics industry is rapidly evolving and expanding. The field is full of new companies trying to obtain the status of a “unicorn” — a private company valued $1 billion and more. This article lists some of the possible candidates and gives hints for aspiring roboticists what should they focus on if they want to go really big.
DelFly Nimble an agile insect inspired robot
DelFly Nimble is what happens when roboticists take notes from nature. Inspired by fruit flies, this tiny robot is extremely agile when moving forward and sideways.
Biotechnology
Growing Brains in Lab
Brain spheroids are a relatively new creation. They are a lab-grown bunch of neurons at the early stages of forming a brain. Due to limitations of tissue engineering, brain spheroids are small but they are already finding usage in studying brain diseases.
Scientists Just Took A “Spectacular Step” Towards Lab-Grown Human Egg Cells
A team of Japanese researchers is now closer than any other scientists have come to creating lab-grown human egg cells. That means the day when we can “grow” humans could be fast approaching.
Thanks for reading this far! If you got value out of this article, it would mean a lot to me if you would click the 👏 icon just below.
Every week I prepare a new issue of H+ Weekly where I share with you the most interesting news, articles and links about robotics, artificial intelligence and futuristic technologies.
If you liked it and you’d like to receive every issue directly into your inbox, just sign up for the H+ Weekly newsletter.
Originally published at hplusweekly.com on September 28, 2018.
| H+ Weekly — Issue #173 | 0 | h-weekly-issue-173-1031c873db7b | 2018-10-03 | 2018-10-03 19:01:03 | https://medium.com/s/story/h-weekly-issue-173-1031c873db7b | false | 788 | A free, weekly newsletter with latest news and articles about robotics, AI and transhumanism. | null | hplusweekly | null | H+ Weekly | h-weekly | TECHNOLOGY,TRANSHUMANISM,ARTIFICIAL INTELLIGENCE | hplusweekly | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Conrad Gray | Engineer, Entrepreneur, Inventor | http://conradthegray.com | e60a556ba1d4 | conradthegray | 633 | 102 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | a2c8b344c70b | 2018-07-04 | 2018-07-04 10:55:23 | 2018-07-04 | 2018-07-04 11:01:40 | 1 | false | en | 2018-07-04 | 2018-07-04 11:13:46 | 7 | 10320e7a97b0 | 2.332075 | 14 | 0 | 0 | By Pon Swee Man | 5 | Why Opet Token is a Market-Sustainable ICO that Investors Should Keep Their Eyes On
By Pon Swee Man
The Many Uses of the Opet Token
It is not every day that you encounter an initial coin offering (ICO) with aims to deliver not one, but 5 breakthrough product features. The Õpet platform is slated to encompass wide-ranging features such as an AI chatbot, blockchain, machine learning, edutech — even branching out to the area of social enterprise.
As Õpet Foundation draws closer to its official ICO, much hype and curiosity has been garnered over the scalability and deliverability of the ambitious project. Will the Õpet platform be a revolutionary breakthrough for the education system, or will it be like other ICOs which sell themselves short?
In order to help investors make an informed choice, we’ve compiled a list of Õpet’s unique selling points.
1. A highly efficient and integrated global student records archive
Õpet encompasses a plethora of functions; and yet, the individual segments don’t overlap or compete with each other. This is the power of the loop blockchain: Õpet harnesses its capacity and traceability to incorporate a vast network of information and data on its platform. Everything on the network is well organised, immutable and tamper-proof, making it the prime location to store global academic records for bookkeeping, human resource and verification purposes.
Having a global archive of academic records on the blockchain will greatly speed up the efficiency of school applications as well as ensure that all records issued and stored are legitimate. Students no longer have to manually scrape together portfolios or scan documents to upload to school websites and emails — whatever they have will be kept safe on the blockchain forever.
2. Large potential market size
Õpet’s targeted reach of students in high school and Grades 9–12 easily amounts to 400 million. This colossal market size will only continue to grow as the world moves towards a knowledge economy, with education opportunities and technology increasing rapidly in developing countries.
As students face stress from demanding curriculums and school club activities, the Õpet Bot, with its AI-enabled ability to self-learn the student’s curriculum, can better value-add to the students’ studies by acting as a private digital tutor — supplementing their knowledge gaps and value-adding to what they learn in school.
Should Õpet be able to capture a sizeable chunk of the knowledge-hungry global student market, its profit margin would be more than secured.
3. Opet Token’s many uses
Opet Tokens are utility tokens which can be used to activate many services like Õpet’s AI-enabled digital tutor chatbot or its record verification function.
Unlike other ICOs whose tokens are only valuable within the respective closed-loop blockchain, Opet Token has utility in the real world. This makes Opet Token highly versatile, value-added, and secure — not just a token that hinges on volatile cryptocurrency exchange values.
With the wide range of possibilities and real-world applications afforded to Opet Token holders, it’s no wonder that Õpet’s ICO is a rung above the rest in terms of its market sustainability and trustworthiness — two prized qualities investors should look out for in ICOs.
To find out more about Õpet, be sure to visit their social media sites below:
Official Website: https://Opetfoundation.com/
Twitter: https://twitter.com/Opetfoundation
Telegram: https://t.me/Opetfoundationgroup
Medium: https://medium.com/@Opetbot
Bitcointalk: https://bitcointalk.org/index.php?topic=3735418
YouTube: https://www.youtube.com/c/OpetFoundation
LinkedIn: https://www.linkedin.com/company/Opet-foundation/
| Why Opet Token is a Market-Sustainable ICO that Investors Should Keep Their Eyes On | 594 | why-õpet-coin-is-a-market-sustainable-ico-that-investors-should-keep-their-eyes-on-10320e7a97b0 | 2018-08-03 | 2018-08-03 09:38:41 | https://medium.com/s/story/why-õpet-coin-is-a-market-sustainable-ico-that-investors-should-keep-their-eyes-on-10320e7a97b0 | false | 565 | A blockchain project to enable seamless tertiary & college application and admission | null | opetfoundation | null | Opetfoundation | null | õpetfoundation | EDUCATION,EDUTECH,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,TECHNOLOGY | opetfoundation | Blockchain | blockchain | Blockchain | 265,164 | Õpet | Bringing AI and Blockchain Technologies into education, Õpet is revolutionizing students' lives, helping them to reach their full potential. | 8a81efd34a11 | opetbot | 137 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | dca47aab201b | 2018-06-06 | 2018-06-06 21:16:43 | 2018-06-07 | 2018-06-07 16:51:01 | 1 | false | en | 2018-06-07 | 2018-06-07 16:51:01 | 12 | 103249d41149 | 3.245283 | 39 | 2 | 1 | By Marcus Chang, Program Manager | 3 | Upcoming TensorFlow events in June and beyond
By Marcus Chang, Program Manager
Do you want to learn more about TensorFlow, chat with members of the team, and meet other developers? Join us at any of these upcoming events! Here are some of the places the TensorFlow team will be in June and beyond.
City of London — Photo by © User:Colin and Kim Hansen / Wikimedia Commons, CC BY-SA 4.0 (source: link)
CogX
When: June 11, 2018
Where: London, UK
The Future of Machine Learning is Tiny [Speaker, Pete Warden]
There are hundreds of billions of embedded devices in the world, and vast amounts of unused sensor data. Adding machine learning to these chips will add new interfaces to existing products, and enable entirely new classes of applications. Pete Warden, from Google’s TensorFlow team, will talk about the work being done to shrink deep learning code and models to run on these tiny devices, and why it’s so important.
Mobisys
When: June 15, 2018
Where: Munich, Germany
Emerging Techniques for Constrained-Resource Deep Learning [Speaker, Pete Warden]
Pete Warden will present an overview of quantization, model compression, block sparsity, architecture search, and other recent methods for running deep learning efficiently on mobile and embedded devices.
Google Developer Group (UTC Reading & Thames Valley)
When: June 21, 2018 (6:00 PM — 9:00 PM)
Where: Reading, UK
TensorFlow with Laurence Moroney
Laurence Moroney from the Google Brain / TensorFlow team will be in Reading to chat about some of the things he is working on there, particularly TensorFlow Lite.
CVPR
When: June 22, 2018
Where: Salt Lake City, UT
Emerging Techniques for Constrained-Resource Deep Learning [Speaker, Pete Warden]
Pete Warden will present an overview of quantization, model compression, block sparsity, architecture search, and other recent methods for running deep learning efficiently on mobile and embedded devices.
SciPy 2018
When: July 9–15, 2018
Where: Austin, Texas
Getting Started with TensorFlow [Speaker, Josh Gordon]
A friendly introduction to Deep Learning, taught at the beginner level. We’ll work through introductory exercises across several domains — including computer vision, natural language processing, and structured data classification. We’ll introduce TensorFlow, explore the latest APIs, discuss best practices, and point you to recommended educational resources you can use to learn more. Note from Josh: all the code will be online for folks who can’t make it IRL.
OSCON
When: July 16–19, 2018
Where: Portland, OR
TensorFlow Day at OSCON! (July 17)
The machine learning revolution is powered by open source, which is why we’re hosting a TensorFlow Day at OSCON this year! We’ll have a full day of talks from TensorFlow contributors, great demos, and an open hacking room where you can get hands-on with TensorFlow, and learn how you can be a part of the project. Registration is open both to regular OSCON attendees, and those with expo passes. For more details, see the OSCON web site.
Getting Started with TensorFlow [Speaker, Josh Gordon]
A friendly introduction to Deep Learning, taught at the beginner level. We’ll work through introductory exercises across several domains — including computer vision, natural language processing, and structured data classification. We’ll introduce TensorFlow, explore the latest APIs, discuss best practices, and point you to recommended educational resources you can use to learn more. Note from Josh: all the code will be online for folks who can’t make it IRL.
Google Cloud NEXT ‘18
When: July 24–26, 2018
Where: San Francisco, CA
Tensorflow, deep learning and modern convolutional neural nets, without a PhD [Speaker, Martin Gorner]
The hottest topics in computer science today are machine learning and deep neural networks. Many problems deemed “impossible” only five years ago have now been solved by deep learning: playing GO, recognizing what is in an image, or translating languages. Software engineers are eager to adopt these new technologies as soon as they come out of research labs, and the goal of this session is to equip you to do so. This session will focus on the newest developments in image recognition and convolutional neural network architectures and give you tips, engineering best practices, and pointers to apply these techniques in your projects. No PhD required.
Introduction to TensorFlow [Speaker, Laurence Moroney]
In this session, you’ll learn how you can easily get started with coding for Machine Learning and AI with TensorFlow. We’ll cover the basics of Machine Learning, and how you can build neural networks with no previous experience, and far more easily than you may have expected. This will give you the first steps on your journey towards understanding Machine Learning, Artificial Intelligence, and Data Science!
What’s New with TensorFlow [Speaker, Laurence Moroney]
As fast as Machine Learning and AI are evolving, so is TensorFlow. In this session you’ll get a tour of everything new in TensorFlow, from the release of TensorFlow Lite for Mobile and Embedded Systems, through Eager Mode for an easier programming model, to TensorFlow Hub which gives you a library of pre-trained models and beyond.
| Upcoming TensorFlow events in June and beyond | 194 | upcoming-tensorflow-events-in-june-and-beyond-103249d41149 | 2018-06-18 | 2018-06-18 12:44:24 | https://medium.com/s/story/upcoming-tensorflow-events-in-june-and-beyond-103249d41149 | false | 807 | TensorFlow is a fast, flexible, and scalable open-source machine learning library for research and production. | null | null | null | TensorFlow | tensorflow | TENSORFLOW,MACHINE LEARNING,DEEP LEARNING | tensorflow | Machine Learning | machine-learning | Machine Learning | 51,320 | TensorFlow | TensorFlow is a fast, flexible, and scalable open-source machine learning library for research and production. | b1d410cb9700 | tensorflow | 10,947 | 4 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-13 | 2018-05-13 21:52:46 | 2018-06-22 | 2018-06-22 03:01:01 | 1 | false | es | 2018-06-22 | 2018-06-22 03:01:01 | 2 | 10324e958dd | 2.969811 | 1 | 0 | 0 | Google, Samsung, Amazon. Todos éstos gigantes están listos para llevar a otro nivel el día a día de una persona. Poco a poco nos acercamos… | 5 | ¿White o Black Mirror?
Google, Samsung, Amazon. All of these giants are ready to take a person's everyday life to another level. Little by little we are approaching the realities of science fiction, and without noticing it, they already form part of our lives.
Let us set aside for a moment the scary stories that Black Mirror has so skilfully introduced us to. Technology will also bring favourable things.
At the last Google I/O, the company amazed us with the developments it will release around the world in the coming months. What personally caught my attention was the progress of the Google Assistant, which was able to book an appointment (in English, for the moment) on behalf of a given person. Beyond a very fluid dialogue, it included idioms and expressions that make the conversation even more personal.
Should we be thinking about the next level of the Turing test? Or should we pause for a while to see how far Siri and all the virtual assistants will go?
We need not look far: the fact that Amazon has set up shop in Mexico gives us an idea of how accepting more and more people are of e-commerce. The opening of its store in the United States, with the smallest number of employees and greater automation, is a reflection of the next step it is pursuing.
Prices fall to some extent by dispensing with staffing costs in almost every area, and we are offered a greater variety of products and prices. Depending on the company, we may well be offered tools to compare prices, quality, and so on.
Shipping is not a problem either, thanks to well-established logistics, with or without memberships. Virtual stores with interfaces that emulate those of an ordinary supermarket exist and continue to be developed, all with the aim of nudging even the most reluctant to buy online.
However, with the rise of these environments, and with the scandals that engulfed Facebook, fear about the information we hand over grows in parallel. These are aspects that, for better or worse, are already being improved under a sharper lens in favour of the consumer, who is not always aware of the terms and conditions accepted as a matter of routine. Yes, I argued that it brings many benefits, but we have already seen some flaws.
I will say it once more: you lose nothing by covering the front camera. Whoops.
There are many gadgets that promise to alert us to potential illnesses, keep us informed of our state of health (like the smart toilet), and even throw recommendations and motivation our way to lead a higher-quality life.
Technology is not limited to electronic devices. Processes have been nourished even further by these developments:
Alternative methods of treating raw materials for the benefit of the environment. Methods for making synthetic materials with better characteristics, taking advantage of new materials.
Processes that are less harsh on the people who take part in them and that ensure their well-being. Processes that, while running, simultaneously regenerate or clean up the surroundings around us.
Processes that lead us to better nutrition, making herbs and vegetables we consider unpleasant, and the exploration of new foods, attractive for people of all ages, even for those who, due to medical treatments such as chemotherapy, cannot perceive any flavour: the case of the miracle fruit.
The idea is to optimise our daily lives: to spend less time on activities that technology lets us shorten, and to enjoy them more (or to do away with other systems needed to carry them out).
The same goes for the workforce. There has long been a great fear of jobs being lost. It is a fact, but new ones will be created. We will have the opportunity to spend our time on things that satisfy us more and give us more. There are several topics around this; I will write about them in more detail soon.
Humans 0 — Kuka 4
Advances that allow people who cannot conceive (transgender people, infertile people, people with reproductive-system problems, and so on) to do so. Through progress in uterus transplants, improvements in in-vitro fertilisation, and more, the possibility of having children becomes real, and even of doing it better: "improving the species" through science.
We may well reach the point where machines attain the singularity, or perhaps not, but either way we will face a series of issues to address as a society concerning politics, ethics, philosophy, economics, and everything else involved in this new industrial revolution.
| ¿White o Black Mirror? | 1 | white-o-black-mirror-10324e958dd | 2018-06-22 | 2018-06-22 03:01:01 | https://medium.com/s/story/white-o-black-mirror-10324e958dd | false | 734 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | Lilián Zamora | Industrial Designer and entrepreneur. Science, technology, architecture and arts are also my passions. | 99ab8cc8a0d9 | lianzave | 5 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-13 | 2018-03-13 03:00:14 | 2018-03-13 | 2018-03-13 03:05:06 | 0 | false | en | 2018-06-09 | 2018-06-09 13:34:32 | 7 | 103315c4afa9 | 0.128302 | 25 | 0 | 0 | Table of contents | 2 | Machine Learning
Table of contents
Types of Machine Learning (https://goo.gl/c5PnLE)
Introduction to Machine Learning (https://goo.gl/1MfxpV)
Basic Theory of Linear Regression (https://goo.gl/j9FcaJ)
Trying Out the Code for Linear Regression (https://goo.gl/z61eFK)
When Errors Decide the Fate of Linear Regression (http://bit.ly/2qvKIqN)
Basic Theory of Logistic Regression (http://bit.ly/2KDzisK)
Introduction to K Nearest Neighbor (http://bit.ly/2JtlalB)
| Machine Learning | 34 | table-of-contents-machine-learning-theory-103315c4afa9 | 2018-06-11 | 2018-06-11 03:21:52 | https://medium.com/s/story/table-of-contents-machine-learning-theory-103315c4afa9 | false | 34 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Pongthep Vijite | Lead Engineer at Tencent (Thailand) | 4fa507dc4a5c | pongthepv | 207 | 34 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-24 | 2017-11-24 12:09:26 | 2017-11-24 | 2017-11-24 11:37:36 | 1 | false | en | 2017-11-24 | 2017-11-24 12:12:26 | 3 | 10333d773a20 | 0.916981 | 6 | 0 | 0 | Hey | 4 | Explore Medium Writing to Earn Free ARF Tokens
Hey
Your writing has got more to explore.
Here is how you can earn free tokens from Airfio.
Airfio is an advanced cryptocurrency startup designed with artificial intelligence technology. The platform is announcing its official token-earning program: write a post on Medium about Airfio and earn free ARF tokens.
Follow this link to participate: https://goo.gl/VdxvEU
Select any Topic from below list
· “Airfio ICO Starting from 5-Dec 2017 and ends at 3-Jan-2018 “
· “Airfio ICO launch “
· “The Biggest ICO launch with AI technology “
· “How Interesting is Airfio ICO?”
· “Airfio Influencing machine learning in Concurrency “
Note:
1. You are free to write on your own topic with regards to Airfio (T&C apply)
2. Do not forget to mention airfio.com in your article
1–200 Likes — 8 ARF tokens
200–700 Likes — 12 ARF tokens
700+ Likes — 20 ARF tokens
Article details
Word Count: 300 and above
Language: Any
Visit here to know more about Airfio: http://airfio.com/
You can also participate in our Affiliate program — https://airfio.com/r
All the best!
| Explore Medium Writing to Earn Free ARF Tokens | 68 | explore-medium-writing-to-earn-free-arf-tokens-10333d773a20 | 2018-01-30 | 2018-01-30 19:05:14 | https://medium.com/s/story/explore-medium-writing-to-earn-free-arf-tokens-10333d773a20 | false | 190 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | airfio coin | Airfio is a decentralized platform presenting future of crypto banking by integrating Artificial Intelligence in crypto world. | 83a61c3bc5b4 | arfcoin | 18 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 81cc2bea992c | 2017-11-08 | 2017-11-08 05:25:44 | 2017-11-16 | 2017-11-16 09:26:03 | 4 | false | en | 2018-03-05 | 2018-03-05 11:35:12 | 1 | 10345496b61d | 5.383019 | 0 | 0 | 0 | Will Artificial Intelligence result in an explosion of creativity or will it turn us into mindless drones? | 5 | A Franchise Called Creativity
Will Artificial Intelligence result in an explosion of creativity or will it turn us into mindless drones?
Artificial intelligence is here and so are numerous theories about humans being replaced by machines. However, one industry deemed safe from the clutches of the replacement narrative is creative design. It seems most machine-created designs have limitations, and that a human touch adds much more to the quality than a machine ever could.
This could be attributed in part to empathetic traits human beings possess as opposed to finalising a visual design based on digital inputs alone. But how long before AI comes up to speed, and enters the realms of creativity is a question being explored relentlessly.
I am weary of the debate of AI-vs-humans. It seems every new conversation or news article either seems to extol technical excellence or incite fear to the point of mental exsanguination.
I am an aspiring musician myself, and to me the fact that a machine can create music from digital inputs is an exciting proposition. The reinforcement it can offer a creative mind holds much allure. In my readings on music, I could not help but see an interesting analogy in how music creation and artistic creativity have evolved through the years.
At the dawn of the 1920s, Steinway had crafted and sold more than 400,000 pianos. That was an age when the only way a household could enjoy music was to hear it played live, as part of a play or a festival, or to own the instruments themselves at home.
Such was home entertainment, which involved buying expensive equipment and taking rigorous lessons before you could produce music fit to entertain.
The amount of creativity required to create a musical piece is significantly high and this was mostly relegated only to individuals who could spend enough time and effort to that end. Amidst the myriad piano makers, Steinway led the market, a legacy it maintains to this day, although the company seems to have fallen on hard times.
The decline of piano-making can be attributed to the advent of the radio. With more and more people gaining access to home entertainment in the form of a radio, people no longer needed cumbersome lessons or heavy equipment that required special care at home. As more households adopted radios, demand for pianos gradually decreased, eroding the niche art of piano-making. As of 2015, Steinway sold only 33,818 pianos.
The invention of television did the same to radio, bringing a further shift in how home entertainment was perceived, a concept immortalised by the popular track "Video Killed the Radio Star". Although the radio still exists, a significant portion of home entertainment media was eventually dominated by television. As ubiquitous as the television became, another challenge awaited the medium in the form of the internet.
The boom in content after the advent of the internet has been exponential. Like the proverbial fire of Prometheus, it has enabled every layman to potentially become popular, create content, and share it with the world at the click of a button or a swipe of the finger.
The internet, photography through cellphones, and online sharing has provided a much-needed platform through which people can share content on the go. Creativity no longer rests in the realm of the artists. Technology has made everyone into a potential artist, so much so that it has become impossible for a human being to consume all of that in his/her lifetime — whether the content is good, bad, or mediocre is a debate for another day. Technology essentially opened up multiple avenues in the field of creative expression.
It is my hope that artificial intelligence too will become a medium similar to the internet. The democratisation of such a technology can yield unprecedented highs in human creativity. While public implementation is still a long way off, AI has more or less entered our domains through the myriad cellphone apps and similar augmentations. We can cite numerous examples of how AI has helped organise our lives better, provide valuable suggestions, and improve our quality of life marginally.
AI tools have become invaluable in the realms of artistic creativity as well. Photography aids, combination filters, writing aids, musical notation tools, and tempo matchers have been of great help to the artist.
For the most part, I too have benefitted considerably from using digital tools. However, my experience as a musician has been somewhat different. As a guitarist, in the course of my practice, I execute a specific bend of the guitar strings, which gives songs an additional edge, a punch, something I employ in all the covers I make.
Recently I started using an application, which helps me compose and keeps a check on timing, providing instant feedback of my playing. It tells me where I’ve missed and where I am going wrong. While it has its share of pitfalls, all in all, it is an excellent tool for a learner and composer. There is also an AI element in this, which recognises portions where I lag and then tailors the exercises in such a way that the difficult parts are automatically slowed down, helping me cope.
But every time I execute a bend, a warning notification tells me that I am hitting the wrong notes. For the first few times, I genuinely believed that I was hitting the wrong note. However, when I attempted playing the same with the original soundtrack, it sounded great. This was when I realised that the fault does not lie with me but with the tool itself, due to its limited parameters and its inability to process ‘aberrations’.
For in essence, my unique contribution to a song is considered an aberration by its templates. A few months of such practice and in all probability, I will lose my ability to execute note bends.
And so I arrive at the question, one more philosophical than objective, as to how a technology like AI will function in the realms of creativity?
A big advantage of AI technology is the ability to create frameworks around complex teaching methods. Previously, traditional analog metronomes and foot-tapping were the norm for learning the rhythm of any instrument. However, some of the best forms of music came from digressions from the norm. Some of the best musicians in the world claim to have little to no theoretical knowledge.
I will not lie, for I myself have benefitted from the frameworks technology has had to offer. My rumination is internal, for I am aware that a larger number of guitarists will result in the explosion of art, when they can all collaborate.
But now we have sophisticated software that can time moves appropriately and aid in learning. Will we see fewer Hendrixes, Petruccis, and Amotts? When there are templates and frameworks around every creative art form, when there are machines that can replicate the parameters of song construction (which already exist), and when large-scale democratisation of such products occurs, what influence will it have on the creative arts?
This idea of artificial intelligence negating the things that are potentially unique about human beings concerns me, in my own irrational fashion. Will the wave of automation that improves efficiencies manifold take the panache out of the art itself?
If creativity becomes a readymade solution, will there be anything creative left in the world?
| A Franchise Called Creativity | 0 | a-franchise-called-creativity-10345496b61d | 2018-03-05 | 2018-03-05 11:35:13 | https://medium.com/s/story/a-franchise-called-creativity-10345496b61d | false | 1,241 | Musings of a troubadour masquerading as an normal human being. | null | null | null | Craynonymous | craynonymous | GANESH CHAKRAVARTHI,CRAYNONYMOUS,CRG | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Ganesh Chakravarthi | Cyclist, Guitarist, Writer, Editor, Tech and Heavy Metal enthusiast — Jack of many trades, pro in two. | c2f5689a2729 | ganeshcr | 105 | 22 | 20,181,104 | null | null | null | null | null | null |
|
0 | # some example feature engineering and data cleaning
import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.utils import resample

# LotFrontage 'NA' is to be reclassified as 0, ie there is no street area facing the house
housef['LotFrontage'] = housef['LotFrontage'].fillna(0)
# converting the year values by subtracting 1872 from them (as this is the year the oldest house was built)
# this is in effect to do some scaling of the data
housef['YearBuilt'] = housef['YearBuilt'] - 1872
housef['GarageYrBlt'] = housef['GarageYrBlt'] - 1872
housef['GarageYrBlt'] = housef['GarageYrBlt'].fillna(0)
housef['YearRemodAdd'] = housef['YearRemodAdd'] - 1872

# cross validating the model
scores = cross_val_score(lr, Xstrcomb, houseftargetcombtr, cv=10)
print("Cross-validated scores:", scores)
print("Mean CV R2:", np.mean(scores))
print("Std CV R2:", np.std(scores))
fixedaccuracy = np.mean(scores)
predictions = cross_val_predict(lr, Xsttcomb, houseftargetcombtt, cv=10)
r2 = metrics.r2_score(houseftargetcombtt, predictions)
print("Cross-Predicted R2:", r2)

# printing the coefficients of the lasso regularisation
lasso.fit(housefmodelcombpredtr, houseftargettr)
lasso_coefs = pd.DataFrame({'variable': housefmodelcombpredtr.columns,
                            'coef': lasso.coef_,
                            'abs_coef': np.abs(lasso.coef_)})
lasso_coefs.sort_values('abs_coef', inplace=True, ascending=False)
lasso_coefs.head(20)

# sample feature engineering, to convert the different 'condition' values
# into ranked numerical values
qual_map = {"No": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
qual_map_no_na = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
houser = houser.replace({"BsmtCond": qual_map,
                         "BsmtQual": qual_map,
                         "ExterCond": qual_map_no_na,
                         "ExterQual": qual_map_no_na,
                         "FireplaceQu": qual_map,
                         "GarageCond": qual_map,
                         "GarageQual": qual_map,
                         "HeatingQC": qual_map_no_na,
                         "KitchenQual": qual_map_no_na,
                         "PoolQC": {"No": 0, "Fa": 1, "TA": 2, "Gd": 3, "Ex": 4}})

# sample replacement of missing values
houser[['MasVnrArea']] = houser[['MasVnrArea']].fillna(value=0)

# downsampling the majority class
df_majority_downsampled = resample(housenormal, replace=False,
                                   n_samples=125, random_state=123)
| 15 | null | 2018-05-23 | 2018-05-23 02:43:55 | 2018-05-23 | 2018-05-23 06:38:06 | 9 | false | en | 2018-05-24 | 2018-05-24 05:23:33 | 0 | 10350724676c | 7.74717 | 2 | 0 | 0 | Project 3 introduced us to the meat of the course, statistical modelling and analysis. The project was split into three parts, and required… | 2 | DSI Project 3 — Regression and Classification with Housing Data
Project 3 introduced us to the meat of the course: statistical modelling and analysis. The project was split into three parts and required us to use the 'Ames Housing Data' available on Kaggle to come up with models relating to housing prices. The Capstone Project notwithstanding, this was the most time-consuming of the assignments we were given.
Part 1
For the first part, we were asked to build a model to estimate the value of homes from fixed characteristics. The said ‘fixed characteristics’ were not defined, and it was up to us to distinguish which features were fixed and which were renovate-able.
The initial DataFrame that we had to use
A core component of this assignment was the cleaning of data and feature engineering, both of which can be broadly categorised as pre-processing. The good news here was (as with most other cases involving datasets available on Kaggle) there were a number of references which we could obtain insights from. The bad news was there is really no right or wrong when it comes to the pre-processing, just plenty of best practices (particularly for the feature engineering portion).
For ease of analysis, I decided to split the data for this part into numerical and categorical variables for separate handling, before combining them into a single Dataframe for the final modelling. For the numerical variables, I utilised the feature correlations against price in deciding which variables to include in the model.
Using the Seaborn Heatmap function to visualise correlations
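As an illustration of this correlation-based screening, here is a minimal sketch that ranks numeric features by their absolute correlation with the sale price. The toy DataFrame below is invented for the example; these are not the actual Ames values.

```python
import pandas as pd

# made-up sample rows, standing in for the Ames housing data
df = pd.DataFrame({
    'OverallQual': [5, 6, 7, 8, 9],
    'LotArea': [8450, 9600, 11250, 9550, 14260],
    'SalePrice': [140000, 160000, 223500, 250000, 307000],
})

# absolute correlation with the target, strongest first
corr = df.corr()['SalePrice'].drop('SalePrice').abs().sort_values(ascending=False)
print(corr)
```

Taking the absolute value matters here: a strongly negative correlation is just as useful a predictor as a strongly positive one.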
I then chose the categorical variables by selecting those that had enough unique values to warrant further interest. This meant analysing the features individually, and also eliminating the features that had near-zero variance. Dummies were obtained for these categorical variables for them to be eventually used in the regression model, since linear regression only accepts numerical variables.
Analysing the categorical variables
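The dummy-encoding step described above can be sketched as follows, assuming a toy DataFrame in place of the actual housing features (the values here are illustrative):

```python
import pandas as pd

# toy categorical features standing in for the selected housing columns
df = pd.DataFrame({
    'Neighborhood': ['NAmes', 'CollgCr', 'NAmes'],
    'LotShape': ['Reg', 'IR1', 'Reg'],
})

# pd.get_dummies expands each categorical column into 0/1 indicator columns;
# drop_first removes one level per column to avoid perfect multicollinearity
dummies = pd.get_dummies(df, columns=['Neighborhood', 'LotShape'], drop_first=True)
print(list(dummies.columns))
```

The resulting all-numeric frame is what can then be concatenated with the numerical features and fed to the linear regression.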
The next step was then to combine the numerical and categorical features which I had selected into a single Dataframe, and carry out the modelling. The data from 2010 was segregated as the testing set, with the remaining data used for training of the model. Scaling was done prior to the modelling to ensure that variables of different magnitudes did not get misrepresented in the final model.
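The scaling step might look like this sketch; the numbers are invented, and the key point is that the scaler is fit on the training rows only, so the held-out 2010 test set never leaks into the scaling statistics:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy features of very different magnitudes (eg lot area vs number of rooms)
X_train = np.array([[1000.0, 1], [2000.0, 2], [3000.0, 3]])
X_test = np.array([[1500.0, 2]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit statistics on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the same mean/std on the test set

print(X_train_scaled.mean(axis=0))  # each scaled training column has mean ~0
```

After scaling, each column contributes on a comparable scale, so regularisation does not unfairly penalise features that merely have large units.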
My model had a cross-validated R2 score of 0.78, which I felt was high enough to require no further iterations. To simplify it further, I ran it through a final round of lasso regularisation to check which of the features were really relevant. This was my preferred method over other feature-selection methods such as K-Best and RFE, since it incorporates feature selection as part of the regularisation used to counter overfitting. As it turns out, some features, such as the year the house was built, were far more important predictors of the sale price than others, such as the lot area the house was built on.
Results of Lasso Regularisation
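To illustrate how Lasso doubles as feature selection, here is a small sketch on synthetic data; the split between informative and noise features below is invented purely for the example:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
# the target depends only on the first two features; the other three are noise
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.randn(200)

# the L1 penalty shrinks uninformative coefficients all the way to zero
lasso = Lasso(alpha=0.5)
lasso.fit(X, y)
print(np.round(lasso.coef_, 2))
```

Features whose coefficients survive the penalty are the ones worth keeping, which is exactly the ranking the `abs_coef` column in the project's output captures.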
To visualise the regression output of the model which I had built, I decided to create one last ‘Jointplot’ using the Seaborn library. This essentially shows how my model predicts the prices relative to the actual prices observed in the test set that was set aside earlier.
Jointplot indicating results of final model
Part 2
The premise for the second part of the project was as follows. Once the renovate-able features have been identified, the relative importances of these features in predicting the sales price (which could not be explained by the fixed features in the earlier part) could be determined. This would then indicate which features within the houses could be potentially renovated, so that appropriately priced houses could be purchased and renovated by the in-house renovation team before being sold for a profit.
As I did not complete the pre-processing of the entire set of data earlier on, a fair bit of time was spent cleaning and engineering the data for the renovate-able features before I could proceed to the modelling stage.
Selecting features by eliminating those with near zero variance
As with the earlier part, I then ran a linear regression of the selected features against the remainder of the sales price not explained by the model in part one. After running a regularisation using Lasso, I determined that the four most important renovate-able features were Overall Quality, Fireplace Quality, Kitchen Quality and Basement Quality. The relationship that these features have against the remainder of the sales price is plotted below.
I figured I’d also make use of the ‘statsmodels’ package to validate these results. This package publishes a summary that includes data such as p-values and t-values. These can also be used to eliminate features that may not have statistically significant predictive power, before carrying out another iteration of modelling if necessary.
Output from Statsmodel
From a business perspective, I felt that the regression model I had created would not be immediately applicable in the real world for the following reasons. 1) The cost of renovations varies by renovation type, and this has not been factored in even if renovation labour is cheap. 2) Economy-wide factors such as lending rates and the rate of income growth may affect the overall attractiveness of the housing market and buyers’ willingness to spend on posh, done-up features of higher quality.
Part 3
The last part of the project was an introduction to classification. We were asked to determine which property characteristics were predictive of an ‘abnormal’ sale. The section required an understanding of both how to deal with imbalanced classes (given that the ‘abnormal’ category was actually a minority class) as well as different classification techniques. I will not delve into the classification techniques that I used in this article, as I will be able to elaborate this in greater detail in my future write-up relating to my Capstone Project.
Given that ‘abnormal’ was a minority class, I decided to downsample the majority class of ‘normal’ so as to create balanced classes. This balanced set of data was used in creating the classification model. An alternative would have been to upsample the minority class, or to carry out a combination of both techniques.
Initial distribution of different classes
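The downsampling step can be sketched with sklearn's `resample` utility on a toy class column; the 50/10 split below is invented for the example:

```python
import pandas as pd
from sklearn.utils import resample

# toy imbalanced sale-condition column
df = pd.DataFrame({'condition': ['normal'] * 50 + ['abnormal'] * 10})
majority = df[df['condition'] == 'normal']
minority = df[df['condition'] == 'abnormal']

# sample the majority class without replacement down to the minority size
majority_down = resample(majority, replace=False,
                         n_samples=len(minority), random_state=123)
balanced = pd.concat([majority_down, minority])
print(balanced['condition'].value_counts())
```

The trade-off of downsampling is that it discards majority-class rows; upsampling the minority class keeps all the data at the cost of duplicating observations.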
After completing the classification model on the balanced data, my findings were as follows. Kitchen quality and the number of bathrooms in the basement were positive predictors of ‘abnormal’ sale prices, whilst the age of the house was a negative predictor (ie the older the house, the less likely it is to have an ‘abnormal’ sale price). Without rebalancing the data, the model would likely have been biased towards the majority class, and hence lacking in precision and accuracy (terms I will elaborate in greater detail in my future write-up on my Capstone Project).
So what were my key takeaways from this assignment? Structure and clarity of mind are extremely important when carrying out a data science project. There are plenty of great resources out there highlighting Data Science workflows and roadmaps to utilise, but in my brief experience thus far the flow can be generalised as follows.
Define Objectives
ETL (Extract, Transform, Load) the Data
Carry out EDA and Data Cleaning
Build the Baseline Model
Repeat steps 3 and 4 in an iterative manner as required
Summarise and Communicate Results
One of the personal lessons I learnt was that quality pre-processing of data can have a significant impact on the eventual results, as cumbersome and un-sexy a step as it might be. I think many of us new to the field make the mistake of jumping straight into the modelling stage without understanding or cleansing the underlying data sufficiently.
Another key piece of advice from my instructor which served me well was to start out with simple modelling, and then iteratively move onto more complex models. For someone like myself fairly new to coding, this also helped in establishing confidence, a lot like how firms start off with building Proof-of-Concepts (PoCs) before scaling up or engaging in more complex tasks.
Though this assignment was time consuming, it also proved to be highly rewarding, as I got to experience a fairly complete data science work-cycle. With some basics under the belt, the fun was just beginning, as I would soon find out in my next assignment.
DSI Project 3 — Regression and Classification with Housing Data · By Abdur Raheem Basith · 2018-05-24
The Coinscious Summer Tour — July
For us here at Coinscious, this has been the summer of conferences and expos. We’ve been on an international tour — Amsterdam, Atlantic City, Montreal, San Francisco, DC, and Krakow. Our goal: to show businesses the new Coinscious Collective™ platform and services, and to connect with thought leaders doing groundbreaking work with blockchain and cryptocurrency.
First up was the Blockchain Expo in Amsterdam, packed with 8,000 attendees. Day one focused on ICOs and cryptofinance, while day two focused on the effect of blockchain on insurance, banking, and payment options. Both days delivered insight on how blockchain tech is transforming existing business models. There was also significant focus on challenges developers face as they try to build blockchain apps.
David Buell (Coinscious Marketing Director) and Binh Ho (Coinscious COO) at the Blockchain Expo in Amsterdam, Netherlands
Between conferences, our CTO, Daniel Im, made his way to Atlantic City for the Blockchain World Conference. He listened to leaders weigh in on potential US blockchain regulations. There he also offered his congratulations to Amir Dossal for winning the inaugural Blockchain Humanitarian award, an award created to honor leaders who are using blockchain technology to solve real-world challenges.
Allison Shealy (Attorney at Shulman Rogers), Daniel Im (Coinscious CTO), and Amir Dossal (‘Blockchain Humanitarian’ Award Winner)
From there, we attended Startupfest in Montreal. This has been called “a music festival for startups” by Reddit Founder Alexis Ohanian. The event takes pride in rethinking how conferences should be done. It’s a place where startups can shine, make connections, meet interested investors, and learn from others who have successfully navigated their own startup journey. While there, our CTO, Daniel Im, discussed how to bridge the gap between AI research and the AI industry.
Daniel Im (Coinscious CTO) at Startupfest Montreal
Next, it was on to San Francisco for Distributed 2018, where we were a sponsor. Distributed focuses on the power of the decentralized business and aims to bridge the gap between Eastern and Western blockchain initiatives. At one panel, leaders of major cryptocurrency exchanges discussed the quickly changing landscape surrounding the coin market, its infrastructure, regulations, and oversight.
Coinscious was a sponsor at Distributed 2018 in San Francisco. Shown here are Binh Ho (Coinscious COO), Agnieszka Osuch (Conscious Communications Manager), Daniel Im (Coinscious CTO), and Ena Vu (Coinscious Project Manager).
Cointime interviewed Coinscious at Distributed 2018. With Ethan Skowronski-Lutz, David Buell (Marketing Director), Binh Ho (COO), and John Gidding
Also at Distributed, Daniel connected with Bianca Chen, who is currently executive producing the docuseries Next: Blockchain.
Bianca Chen (“Next: Blockchain” Executive Producer— middle left), Daniel Im (Coinscious CTO — middle right)
In Krakow, Poland, we hosted an event for local blockchain and cryptocurrency enthusiasts. Our COO, Binh Ho, spoke to attendees about AI and data-driven insights for the coin market.
Binh Ho (COO) hosting a blockchain event in Europe
On this tour, we met a lot of amazing people and companies. These interactions showed us that, now more than ever, there’s a need for our platform. The blockchain industry is really starting to develop. Thought leaders and professional and amateur investors are looking for a platform like the Coinscious Collective™ that can help them navigate and respond to the nuances, complexity, and volatility of the coin market.
Our whirlwind summer tour isn’t over yet. During August, we’ll be in Asia, connecting with people passionate about blockchain and cryptocurrency. We’ll also be in Las Vegas at BlockShow. Be sure to follow our journey on social media:
Coinscious (@coinscious_io) | Twitter
The latest Tweets from Coinscious (@coinscious_io). AI & Data Driven Insights for the Coin Market (www.twitter.com)
Coinscious | Facebook
Coinscious. Follow us on Facebook! (www.facebook.com)
Coinscious (@coinscious) • Instagram photos and videos
46 Followers, 1 Following, 11 Posts - See Instagram photos and videos from Coinscious (@coinscious) (www.instagram.com)
Coinscious Chat | Telegram
You can view and join @coinscious_chat right away. (t.me)
By Coinscious · 2018-08-10
Machine Yearning: The Rise of Thoughtful Machines
In the mid twentieth century, artificial intelligence researchers invented a new type of computational system that could detect patterns in images — a daunting task for previous technology. Because this new system comprised highly interconnected information-processing nodes, resembling the organization and function of the brain, it became known as an artificial neural network.
At that time, neuroscience was still in its infancy, and the understanding of the brain was limited. Scientists knew that neurons could pass signals to other neurons. They had some idea that the connections between neurons were flexible, and that connection strengths could change. And by peering at cells through a microscope it was easy to extrapolate that the total number of neuronal connections in the brain was astronomical. But basic information about the brain’s operation was still mysterious. Nobody had a clue how the human brain’s 89 billion neurons were subdivided into functional groups, how electrochemical fluctuations encoded information, or how neural circuits processed electrical signals. Thus, the similarity between artificial neural networks and biological neural networks didn’t extend very far.
At least, it didn’t initially.
Today, neural networks resemble biological brains more vividly. These artificial systems can perform complicated tasks with surprising intelligence: Researchers are currently developing systems that can learn how to drive a car just by observing a human driver, or that can cooperate seamlessly with humans to solve problems jointly. And the secret to the performance of these advanced neural nets is a complex and inscrutable system of connections buried in so-called hidden layers. The more hidden layers a deep learning neural network has, the more remarkable its problem-solving ability — and the less anyone can understand how it’s working.
Hence, we have reached a peculiar stage in the history of technology wherein the researchers designing systems are also desperately trying to understand how they work.
To investigate the intricate computation occurring deep inside neural nets that classify images, for example, one strategy involves systematically feeding the network different images and singling out one hidden node at a time to find out what image properties cause that node to activate. In a neural net that can identify cupcakes in photos, there might be a hidden node that responds to blue stripes angled at 45 degrees. Or, there might be a node that responds to pink frosting in the center of the frame. By discovering the image properties uniquely recognized by each of many hidden nodes, researchers can start to piece together the function of the hidden layers, and how the composition of these layers can decode information about the image — from pixel to cupcake.
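The single-node probing strategy described above can be sketched with a toy model; the random weights and fake 8×8 "images" here are stand-ins, not any real trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny one-hidden-layer "network" with random, untrained weights.
W1 = rng.normal(size=(64, 16))  # flattened 8x8 image -> 16 hidden nodes

def hidden_activations(image):
    """ReLU activations of the hidden layer for one 8x8 image."""
    return np.maximum(0.0, image.ravel() @ W1)

# Probe hidden node 3: feed many images and record its activation on each.
images = rng.normal(size=(500, 8, 8))
node = 3
acts = np.array([hidden_activations(img)[node] for img in images])

# The top-activating inputs hint at which image property excites this node.
top = np.argsort(acts)[-5:][::-1]
print("images that most excite node", node, ":", top)
```

In practice the same loop is run with real photographs and a trained network, and the shared properties of the top-activating images (blue stripes, pink frosting) are what get reported.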
This same strategy is a staple of neuroscientific research. Foundational studies of the brain’s visual system homed in on the precise properties of light and the visual field that activated specific neurons in different regions of the brain. With this method, neuroscientists learned that there are numerous brain areas in the visual system that each respond to different aspects of visual images — some neurons encode the region of space that a visual stimulus inhabits, some neurons encode colors, and other neurons encode more complex properties like object identity. And now that these neurons’ functional properties are clear, neuroscientists are able to form theories about how different visual areas connect, work together to decipher visual information, and distribute it throughout the rest of the brain.
It seems then that neural networks are more aptly named than their inventors ever realized. Neural network researchers are using a strategy to study their creations identical to one neuroscientists use to study the brain, which leads to some thought-provoking speculation: What other neuroscientific research methods could be useful for studying neural networks?
It’s possible to imagine how fMRI, tractography, optogenetics, or event-related potential techniques could be tailored to the study of neural networks. In neuroscience, these popular and powerful methods each capture a different type of data, and so can be used to test different types of hypotheses. The brain is too complex to ever yield complete knowledge of every neuron’s activity at every moment in time, so research questions focus on specific aspects of neural operation: the location of activity in the brain, whether a type of cell is necessary for some behavior, or the time course of a specific neural process. Then, findings from different research programs can be compared and woven together to form a theoretical understanding of how the brain works. This same broad strategy could be applied to the study of artificial neural networks, the ever-increasing complexity of which also thwarts detailed mechanistic understanding.
If we extrapolate further, to the bleeding edge of neuroscience, we tread into the realm of science fiction. Neuroimaging technologies have been steadily advancing, but the most methodological progress is being made in data analysis. Using the same fMRI data that has been available for decades, neuroscientists are now devising sophisticated statistical tools to answer new questions that were once thought to be unapproachable. Many of these advanced analytical tools, such as multi-voxel pattern analysis, support vector machines, and representational similarity analysis are machine learning applications — they are powered by the same technology that drives artificial neural networks. So, if researchers studying artificial neural networks find success in the adaptation of neuroscience methods to their own work, their efforts might eventually include these recent machine learning applications, at which point neural networks would be deployed in the analysis of themselves.
Introspection, the capacity to gaze inward and reflect on the very mental processes that underlie our inquisitiveness, is often considered to be a defining trait of humanity that sets us apart from other animals. But if advanced neural networks can be directed to analyze their own functioning, would that change how we view ourselves? Would artificially intelligent systems need to be recognized on equal standing with us? Or would we simply need to strike one possible essentially human trait off of the ledger of human nature?
Before we start worrying about losing our unique place in the universe, we can take some small comfort in one likely scenario. Namely, it’s possible that self-reflective neural networks would be more successful in deciphering their functioning than we are as humans. As the great American psychologist William James described, our introspection is “like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see the darkness.” In other words, we have the capacity for introspection, but true introspective understanding is elusive. So our uniqueness would then be preserved: In the club of ineffectual self-reflection, we could still be the sole members.
By Sean Noah · 2018-04-18
The Replacement in Progress: My Ghost Story of Being Replaced by Google
Preface: The warning I got from Google UAC ads
At work, besides writing copy and teaching presentations to make a living, I also take on projects managing clients' Google keyword ads.
Recently, however, Google rolled out a new type of keyword ad for apps, called "Universal App Campaigns," abbreviated as UAC ads.
That sounds like nothing special: just another new ad format, hardly worth mentioning. Yet the shock it gave me was real and substantial; it may count among the events that most changed my thinking in 2017.
Although I am still a little shaken, and not sure whether I am overreacting, let me at least put this on record.
How did AI take over part of my business? How did it do it? And what can I do about it?
This piece is not a marketing deep dive; it is mainly some reflection on where things are heading, so general readers "should" be able to follow without much of a barrier. (I tried my best.)
To state the conclusion up front: artificial intelligence is for real, replacement is already happening, and the times are changing.
And, as the old imperial saying goes: what the emperor does not grant, you may not take.
Perhaps everyone is tired of hearing lines like these. I was too, until it happened to me.
A quick primer: what exactly are Google keyword ads?
Before we can appreciate what makes UAC ads so special, we need a quick look at Google keyword ads, so that readers outside the field understand why advertisers run keyword ads and what the most basic setup looks like.
1/ Why run keyword ads at all?
First, when you rank far down in Google's search results, say on page four or five, consumers simply cannot find you.
That is when you can run keyword ads to lift your listing higher; in a good scenario you appear at the very top of the results, and consumers can easily find you and enter your site.
(The topmost result shown is a keyword ad run by Apple itself.)
2/ Basic keyword ad setup
The basic logic of Google's keyword ads (AdWords) is not complicated: pick some keywords (terms you believe people will search for), then show those people the text ad you want them to see when they search that term on Google.
For example: you want people searching Medium-related terms on Google to see the ad for your writing course. So you set a few keywords, such as Medium, Medium 中文, Medium 寫作, Medium 台灣, and so on.
Next, prepare two or three sets of copy to get them to click through to your page, for instance: "Master Medium on your very first post: Medium crash-course writing class now enrolling," or "The secret to passing 500+ followers on Medium: Medium crash-course writing class now enrolling."
Finally, when a reader searches "Medium 中文," the top of the results will show your ad: "The secret to passing 500+ followers on Medium: Medium crash-course writing class now enrolling." (This is only an example.)
That is roughly the whole idea; it really is not hard.
If you only run on the Search Network, you do not even need to prepare image assets. Google's community offers plenty of tutorials, and Google's support staff can handle some very basic issues for you, so the barrier to entry is not especially high.
(Getting good results, though, is another matter.)
So what is the deal with Universal App Campaigns (UAC)?
This time, Google's UAC ads mainly target app downloads. Apps previously had their own keyword ad format too: the kind of ad that pops up at the top when we search for an app.
(The top listing here is likewise a keyword ad.)
The point of that ad type is to let searchers get to the download page faster and more directly, avoiding conversion drop-off along the way.
Its setup is much the same as a regular keyword ad's; the character limits on the copy differ slightly, but the rest of the logic is basically identical.
Then the UAC ads arrived!
One day, though, a campaign called "Universal App campaigns #1" suddenly appeared in an ad account I manage for a client. Better still, its copy was already written.
At first glance the copy was actually OK, so I asked the client whether this was a new group they had created. They said no.
It turned out Google had generated the ad itself by crawling our past ad copy, the website's copy and data, and the product-page description on Google Play... (I had approved UAC ads long ago, but nothing appeared at the time, so I later forgot about it.)
No big deal, I thought; my own copy was still somewhat better (though much of its text was lifted from my earlier paragraphs, ORZ). Still, the completeness and fluency of the passages Google had picked out genuinely surprised me.
By my judgment, it scored somewhere in the low 70s out of 100.
What amazed, and frightened, me most: this thing even goes and picks its own keywords to bid on.
We used to rely on formats, techniques, and a lot of experience to work out the matching keyword combinations. But UAC ads are domineering: you cannot choose keywords. Papa Google places them for you, and you cannot change them.
This is downright bizarre; it overturns the deep-rooted picture of keyword advertising in my head. I can no longer choose keywords. I am only responsible for supplying assets and a budget. Everything else, such as which kinds of keywords I want to target, how much each keyword should cost, and which placements to use, is simply not allowed.
Papa decides everything; I foot the bill.
The impact does not stop there. We used to run only Search Network ads, but now we are required to run YouTube video ads as well.
Did not provide video assets for it to serve? Then it will cut a video together for you by itself xDDD (though I have not yet seen what that looks like).
(I am curious what its auto-made videos look like.)
Note 1: Google's own introduction to "Universal App Campaigns"
This is Google's own documentation on UAC ads. If you are learning advertising, many of Facebook's and Google's guidelines are actually very well written and worth reading often. I only picked up this habit after taking a course by Xiao Hei (Chiu Yu-ting).
About Universal App Campaigns - AdWords Help
As an app advertiser, you naturally want your app to reach more paying users, but how exactly? With Universal App Campaigns, you can easily reach them across Google's major…lnk.pics
From that page alone you probably cannot tell what is going on; the passage that struck me as especially scary in practice is this one:
Unlike most AdWords campaigns, Universal App Campaigns do not require you to design individual ads. We use the ad text ideas and assets from your app store listing to design ads in several formats and serve them across multiple networks.
You only need to provide ad text, a starting bid and a budget, and then specify the language and locations for your ads. Our system will test different combinations and show the best-performing ads more often, with no extra setup required from you.
Basically, it takes care of everything you were supposed to do.
-
Note 2: This piece mentions AI so often because the creation and delivery of UAC ads are essentially built on a great deal of data analysis and automated, AI-driven processing.
This UAC ad hit me with two shocks and prompted two reflections:
1. Before the rules set by big companies, individuals are fairly powerless
Even though the ads I used to run performed better, they have now been disabled, and I can only carry on under the new rules. And within those rules, the things I would want to configure, such as bids and rankings for specific keywords, are no longer under my own control.
The question is not "can I do this better than it can," but "you no longer have a choice."
Under the ever-expanding systems of Facebook, Google, Amazon, and China's BAT, this will only get scarier. (And we can hardly avoid it.)
2. AI's impact is more direct than we imagine
At work, I never expected AI's impact on me to arrive this fast. I assumed the media coverage was all empty talk, but this really did happen faster than I imagined.
It is like horror movies: many people think the dim-witted characters who hear a strange noise and go to investigate are stupid.
But if you were in that situation, you might well do the same thing, because very few people believe their own world really contains ghosts.
This time, I suppose one found me. It's real.
A month ago, I did not imagine these ads could pretty much generate themselves and serve themselves.
A month ago, I also did not imagine that my value as an ad-management contractor would come under this much threat.
UAC ads are not perfect yet, and they still leave plenty of room for self-correction, but I believe they will keep getting better and gradually scale up.
This episode also forced me to think seriously about which of my skills are easily replaceable. Replacement is happening, and time waits for no one. AI does not joke around: once it moves in, it takes everything clean, with no courtesy spared.
First conclusion: vigilance about the future
Either of the two points above is startling enough on its own; what is scarier is that they tend to occur together at the same company.
If your field happens to be the one affected, can you adapt? How do you plan to adapt?
So I hope everyone factors "being replaced by AI" into their own career planning, because this is very likely not scaremongering.
The corresponding remedy? I am still thinking about it. (After all, I have not quite recovered yet.)
For now, though, what I understand is...
1. Be careful when your eggs sit in one basket: when you rely heavily on a specific skill, be especially cautious. Think it through: if it were replaced, what would you do?
2. Stay alert at all times: even if you feel you cannot be replaced right now, you should re-examine the landscape every so often to see whether that still holds.
M觀點 (M Viewpoint) #9: Will AI really cause mass human unemployment?
AlphaGo has kicked off a new wave of AI enthusiasm, and humanity is about to enter a brand-new AI era. But will ever-stronger AI push humans toward a massive wave of unemployment? This episode of M觀點 analyzes the issue with you. (This episode is the corrected second version; parts of the first video were inaccurate, thanks to the netizens who pointed that out…)lnk.pics
This video offers a fairly preliminary discussion, at least.
As for the rest, I am still feeling my way forward. If I find answers, I will remember to tell you.
Second conclusion: I am grateful for this shock
This experience is probably best captured by the line: "Every ghost story becomes believable only after you meet one yourself; only then do you accept that those unseen ghosts were real all along."
Even though this shock may cost me my current line of work, I am still glad that at 21 I had the chance to take such a hit and experience this urban legend for myself. It was bound to happen sooner or later, and it pushed me to think early about what value I can create as a "human," and what about me cannot be replaced.
Sometimes, knowing versus not knowing is already enough to divide people into two kinds.
If this article helps you become a bit more aware, and a bit more prepared, that would be wonderful :)
Appendix: some further thoughts of my own on UAC ads
This part leans more toward the marketing side; readers without much interest in keyword ads can skip it. It is mainly my observations and conjectures from a month or so of living with UAC ads.
As a contractor, I find these ads have greatly reduced my added value. Before, my campaigns might reach 85 points in performance, while the universal campaigns reach around 70.
I can still do better than it does. But for that mere 15-point gap, many clients may decide it is not worth paying extra money, or even spending the time, on manual management.
Fortunately, UAC ads currently cover only app downloads, but I think there is a real chance they will expand to general ad types.
Personally (speaking as the contractor), I think this approach better serves Google's interests, because in my own campaigns many tactics were about grabbing users' attention, even bidding against competitors' products just to intercept their traffic.
For users, though, I believe many of my own ads were misleading. Even if the mere existence of ads is strongly misleading and distracting, ads can still be tuned toward "ads users can better accept"; that is a trade-off that can be balanced.
In the past, an ad's Quality Score served as the yardstick for the factors above, but with UAC ads Google has essentially stepped down from the referee's chair and onto the field to play. It can now directly adjust how intrusive the ads are.
Moreover, for business owners, especially small and medium-sized enterprises, I think UAC ads are a big plus.
As long as the early setup is done well, with the copy, image assets, and video assets prepared, you spend very little effort afterwards; you basically just let it run happily. Call it a saver of time and effort. (You still have to monitor performance, though.)
What troubles me most right now is that I do not quite know where it scatters the ads, and I often cannot see them myself, which leaves me feeling insecure. These days I am on edge every day, over keyword ads and my other skills alike, hoping to find extra value I can provide. It is a long road, and the work continues.
That is all. Thanks for reading.
By Tao. 邱韜誠 · 2018-08-17
Teaching History is more important than Google Duplex
A conversation from the other day
“A lot of Indians lived in Arkansas right?”
“Well, a lot of Native Americans were killed and forcibly moved to places like Oklahoma.”
“I mean, but they didn’t live everywhere right? They definitely didn’t live in the northeast. Mostly just in Oklahoma and Arkansas, right?”
“Those “northeastern” Native Americans were called Iroquois. The Arapahoe lived in the Great Plains, the Cherokee in the southeast, and the Pueblo, Navajo and Apache in the southwest. They lived pretty much everywhere.”
Silence
In high school, history was my favorite subject. It gave me a look back at the past to better understand who we are and why we are here. It illuminated precedent and nurtured my awareness. But if we need to know history to understand the present, then why are we so bad at it?¹²
Only 8 percent of U.S. high school seniors could identify slavery as the central cause of the Civil War.
Only 50 percent of adults in the U.S. can name the three branches of government
68 percent of the surveyed students did not know that slavery formally ended only with an amendment to the Constitution.
Only 18 percent of eighth-graders are proficient in U.S. History
Only 8 percent of fourth grade students answered questions correctly regarding understanding the impact that settlers had on Native Americans on the 2010 NAEP
Only 44 percent of the students answered that slavery was legal in all colonies during the American Revolution.
If we can’t agree on events that happened in the past, how can we possibly chart a path forward as a nation? We’ve all heard the saying, “history repeats itself”, and not to repeat past mistakes. But how can we learn from the past if we don’t know it?
“Winners write the history books” is another saying I heard a lot growing up. We’ve definitely rewritten the books on a few people. Abraham Lincoln, John F. Kennedy, Martin Luther King Junior; all of these individuals we treat as if we admired them all along.
Abraham Lincoln was disliked by many Americans when he was assassinated by John Wilkes Booth
John F. Kennedy had a 58 percent approval rating when he was killed in 1963
In August 1966, 63 percent of Americans had an unfavorable opinion of Martin Luther King Jr
We create our own form of revisionist history to adapt past events to fit our needs. To fit the picture we want to tell ourselves, with 20/20 hindsight, of our gracious and generous ancestors.
What does this have to do with Google Duplex? In conversations with friends about rising global inequality I’ve mentioned the often quoted stat that 10 people hold as much wealth as the bottom half.
“I don’t see that as much of an issue, think of all the good that Sergey Brin and Larry Page have done, they have brought the internet to billions.”
Billions that still don’t have clean drinking water, billions that still don’t have safe and affordable shelter, billions that still don’t have access to safe surgery. What is the internet worth if we still can’t solve basic human needs?
The social problems of today aren’t knowledge problems, they are power problems. From a technical standpoint, we know how to clean water, we know how to design safe houses, and we know how to perform incredibly complex surgeries. And we (technically) know history.
History then is a power problem, disguised as a knowledge problem. While knowledge problems can be solved by technical innovation driven by competition, power problems require collective political action by organized constituencies that use the power of democratic government to overcome resistance to structural social change.³
Does it really matter if our intelligent assistant can book us a massage if we whitewash the Trail of Tears? Does it really matter that I know the exact times stores are open on holidays if we chose to forget the struggles of slavery? How can we achieve the liberty and equality promised to us by enlightenment if we don’t know the words to our own constitution? Our lack of knowledge of our own history prevents us from achieving the future we collectively desire. If we really do want to bend the arc of history toward justice, let’s just make sure we don’t bend it into a loop and repeat our injustices of the past.
“Who controls the past controls the future. Who controls the present controls the past.” — George Orwell
I tell people all the time that in another life I would have been an educator. Maybe I would have taught history. Could it be that we need fewer engineers working on artificial intelligence, and more history teachers?
1. https://www.washingtonpost.com/news/answer-sheet/wp/2018/02/03/dont-know-much-about-history-a-disturbing-new-report-on-how-poorly-schools-teach-american-slavery/?utm_term=.0a38becca32b
2. https://www.smithsonianmag.com/history/how-much-us-history-do-americans-actually-know-less-you-think-180955431/
3. https://ssir.org/articles/entry/social_enterprise_is_not_social_change?utm_source=newsletter&utm_medium=email&utm_content=Read%20More&utm_campaign=Newsletter
By Andrew Petrisin · 2018-05-18
6. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Today, machine learning is being applied almost everywhere. However, there are many applications that deal with private data. This may include your hospital or banking records or your personal photos and contacts. Thus, special care must be taken while training a model that uses such data. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information.
The following paper describes an approach to protect this private data. It even won a best paper award at ICLR 2017!
Some machine learning applications with great benefits are enabled only through the analysis of sensitive data, such as users’ personal contacts, private photographs or correspondence, or even medical records or genetic sequences. Ideally, in those cases, the learning algorithms would protect the privacy of users’ training data. Unfortunately, established machine learning algorithms make no such guarantee; indeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on specific training examples in the sense that some of these examples are implicitly memorized. Recent attacks exploiting this implicit memorization in machine learning have demonstrated that private, sensitive training data can be recovered from models.
Since machine learning models tend to overfit, they are very likely to store private data. To avoid this, the paper improves upon a specific, structured application of techniques of knowledge aggregation and transfer.
In this approach, an ensemble of teacher models is trained on disjoint subsets of the data. Then, using auxiliary, unlabelled non-sensitive data, a student model is trained on the aggregate output of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this strategy ensures that the student does not depend on the details of any single sensitive training data point (e.g., of any single user), and, thereby, the privacy of the training data is protected even if attackers can observe the student’s internal model parameters.
Now, to strengthen the privacy guarantee, a strategy called PATE, for Private Aggregation of Teacher Ensembles, is used. An improved privacy analysis is also introduced, which makes this strategy applicable to machine learning algorithms, especially when combined with semi-supervised learning.
Let us first see how PATE works:
Step 1: Training the ensemble of teachers.
The dataset is divided into n disjoint subsets (X_n, Y_n), and a model is trained separately on each subset. We obtain n classifiers f_i, called teachers. We then deploy them as an ensemble that makes predictions on an unseen input x by querying each teacher for a prediction f_i(x) and aggregating these into a single prediction. The privacy guarantee of this teacher ensemble stems from its aggregation: when combining the ensemble's votes to make the prediction, we don't want a situation whereby a single teacher's vote can make an observable difference (i.e., the top two predicted labels have vote counts differing by at most one). To introduce ambiguity, random noise is added to the vote counts. If n_j(x) denotes the number of teachers that predict class j for input x, the ensemble's prediction is the noisy maximum

f(x) = argmax_j { n_j(x) + Lap(1/γ) },

where Lap(b) denotes Laplacian noise with location 0 and scale b.
Gamma is the privacy parameter here, and it trades privacy off against accuracy: the more noise the aggregation adds, the stronger the privacy guarantee, but the less accurate the predictions become. While we could use an f such as the one above to make predictions, the noise required would increase as we make more predictions, making the model useless after a bounded number of queries. Furthermore, the privacy guarantees do not hold when an adversary has access to the model parameters. Indeed, as each teacher f_i was trained without taking privacy into account, it is conceivable that the teachers have sufficient capacity to retain details of the training data. To address these limitations, we train another model, the student, using a fixed number of labels predicted by the teacher ensemble.
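The noisy-max aggregation just described is a few lines of numpy. The teacher votes and the gamma value below are made up for illustration; this is a sketch of the mechanism, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_preds, num_classes, gamma, rng):
    """PATE-style noisy max: count the teachers' votes per class, add
    Laplacian noise of scale 1/gamma to each count, return the argmax."""
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# 250 teachers voting over 10 classes on a single query input.
teacher_preds = rng.integers(0, 10, size=250)
label = noisy_aggregate(teacher_preds, num_classes=10, gamma=0.05, rng=rng)
print("aggregated label:", label)
```

Because only the noisy winner is released, never the individual votes, flipping one teacher's vote rarely changes the output, which is exactly the ambiguity the mechanism is after.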
Step 2: Semi-supervised transfer of the knowledge from the ensemble to the student.
The student is trained on non-sensitive, unlabelled data, some of which is labelled using the aggregation mechanism. Since this model is never exposed to the sensitive data, it is the one that gets deployed. To make the most of the limited number of labels, GANs are used to train the student in a semi-supervised fashion.
GANs contain a generator and a discriminator.
They are trained in a competitive fashion, like a two-player game.
The generator produces samples from the data distribution by transforming vectors sampled from a Gaussian distribution.
The discriminator is trained to distinguish samples artificially produced by the generator from samples drawn from the real data distribution.
Both models are trained via simultaneous gradient descent steps on the two players' costs.
Training the student in a semi-supervised fashion makes better use of the entire data available to the student, while still only labelling a subset of it. Unlabelled inputs are used in unsupervised learning to estimate a good prior for the distribution. Labelled inputs are then used for supervised learning.
| 6. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | 3 | semi-supervised-knowledge-transfer-for-deep-learning-from-private-training-data-10382e741230 | 2018-06-05 | 2018-06-05 14:08:58 | https://medium.com/s/story/semi-supervised-knowledge-transfer-for-deep-learning-from-private-training-data-10382e741230 | false | 827 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Krisha Mehta | Computer Science Undergrad trying to figure stuff. | 51f7bb80bd99 | krishamehta | 22 | 36 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-14 | 2018-04-14 01:24:52 | 2018-04-14 | 2018-04-14 01:28:39 | 1 | false | en | 2018-04-16 | 2018-04-16 20:09:16 | 1 | 103a43a63ba9 | 0.411321 | 1 | 0 | 0 | Hi All, | 5 | Practical Machine Learning CoreML, Swift, iOS [Udemy Coupon — 95% Off]
Practical AI and Machine Learning in iOS, Core ML and Swift
Hi All,
I am very excited to share a 95% discount on my new course on Core ML, where you will learn to create intelligent iOS apps. Please use the link below to get started.
https://www.udemy.com/practical-ai-and-machine-learning-in-ios-core-ml-and-swift/?couponCode=MEDIUM
| Practical Machine Learning CoreML, Swift, iOS [Udemy Coupon — 95% Off] | 50 | practical-machine-learning-udemy-course-103a43a63ba9 | 2018-04-16 | 2018-04-16 20:09:17 | https://medium.com/s/story/practical-machine-learning-udemy-course-103a43a63ba9 | false | 56 | null | null | null | null | null | null | null | null | null | Coreml | coreml | Coreml | 178 | Anoop Tomar | null | 37d9e995468d | anooptomar | 4 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 2730b23c70c6 | 2018-03-23 | 2018-03-23 07:26:14 | 2018-03-27 | 2018-03-27 10:51:05 | 2 | false | en | 2018-03-27 | 2018-03-27 12:37:00 | 1 | 103ae9b0a588 | 2.704088 | 34 | 1 | 0 | This post will take you a beginner's guide to Natural Language Processing. A language is a way we humans, communicate with each other. Each… | 5 | A dive into Natural Language Processing
This post is a beginner's guide to Natural Language Processing. A language is the way we humans communicate with each other. Each day we produce data from emails, SMS, tweets, etc., and we need methods to understand this type of data, just as we do for other types of data. We will learn some of the basic but important techniques in Natural Language Processing.
What is Natural Language Processing (NLP)?
As per Wikipedia:
Natural-language processing (NLP) is an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to fruitfully process large amounts of natural language data.
In simple terms, Natural language processing (NLP) is the ability of computers to understand human speech as it is spoken. NLP helps to analyze, understand, and derive meaning from human language in a smart and useful way.
ChatBots: One of the popular application of Natural Language Processing
NLP algorithms are based on machine learning. NLP learns by analyzing a set of examples (i.e., a large corpus, anything from a book down to a collection of sentences) and making statistical inferences, instead of relying on large sets of hand-coded rules. We can organize massive chunks of text data and solve a wide range of problems such as automatic summarization, machine translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation.
Let’s dive deeper…
As we all know, text is the most unstructured form of all available data. It is important to clean and standardize this text and make it noise free. The idea is to take the raw text and turn it into something an ML algorithm can use to carry out prediction. We will talk about a few important techniques using NLTK.
Sentence Segmentation
We break articles into sentences. Often we have to do analysis at the sentence level. For example, we may want to check the number of sentences in an article and the number of words in each sentence.
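In NLTK, sentence segmentation is done with `sent_tokenize`. A dependency-free sketch of the same idea using only the standard library (the example paragraph is made up, and the rule is deliberately naive):

```python
import re

text = ("Natural language is messy. We still want to split it into "
        "sentences! Can a simple rule do it? Often, yes.")

# Naive rule: split on whitespace that follows ., ! or ?
# (NLTK's sent_tokenize handles abbreviations, decimals, etc. far better.)
sentences = re.split(r"(?<=[.!?])\s+", text.strip())

for i, s in enumerate(sentences, 1):
    print(i, s)
```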
Tokenization
Tokenization breaks unstructured data (text) into chunks of information that can be counted as discrete elements. This immediately turns an unstructured string (text document) into more usable data, which can be further structured and made more suitable for machine learning. Here we take the first sentence and get each word as a token. Two different ways of doing this in NLTK are RegexpTokenizer and word_tokenize.
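A sketch of regex-based tokenization in the spirit of NLTK's `RegexpTokenizer(r"\w+")`, using only the standard library (the sentence is illustrative):

```python
import re

sentence = "Tokenization breaks unstructured text into discrete elements."

# Keep runs of word characters and drop punctuation,
# mirroring what RegexpTokenizer(r"\w+") does in NLTK.
tokens = re.findall(r"\w+", sentence.lower())
print(tokens)
```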
StopWords
Consider words like a, an, the, be, etc. These words don't add any extra information to a sentence, and they can often create noise while modelling. Such words are known as stop words. We filter each sentence by removing the stop words, as shown below:
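A sketch of stop-word filtering; the tiny stop-word set below is illustrative, whereas NLTK ships a full English list via `stopwords.words('english')`:

```python
# Illustrative stop-word set; NLTK's English list is much larger.
STOP_WORDS = {"a", "an", "the", "be", "is", "to", "in", "of", "and", "while"}

tokens = ["such", "words", "can", "often", "create", "noise", "while",
          "modelling", "the", "data"]

filtered = [t for t in tokens if t not in STOP_WORDS]
print(filtered)
```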
Stemming And Lemmatization
Some word forms share the same meaning, for example copy, copied, copying. A model might treat them differently, so we tend to strip such words down to their core. We can do that with stemming or lemmatization. Stemming and lemmatization are basic text processing methods for English text.
Stemming
Stemming helps to create groups of words with similar meanings, and works with a set of rules, such as removing "ing" from words ending in "ing". Different types of stemmers in NLTK are PorterStemmer, LancasterStemmer, and SnowballStemmer.
Lemmatization
Lemmatization uses a knowledge base called WordNet. Because of this knowledge, lemmatization can even handle words that stemmers cannot, for example converting "came" to "come".
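The difference can be sketched with a toy suffix-stripping stemmer and a toy lookup-based lemmatizer; NLTK's PorterStemmer applies a much larger rule set, and WordNetLemmatizer consults the WordNet knowledge base rather than this made-up mini-dictionary:

```python
def stem(word):
    """Toy rule-based stemmer: strip/replace a few common suffixes."""
    rules = [("ies", "y"), ("ied", "y"), ("ing", ""), ("ed", ""), ("s", "")]
    for suffix, repl in rules:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + repl
    return word

# Irregular forms need knowledge, not suffix rules; this is the gap
# a WordNet-backed lemmatizer fills.
IRREGULAR = {"came": "come", "went": "go", "was": "be"}

def lemmatize(word):
    return IRREGULAR.get(word, stem(word))

print([stem(w) for w in ["copying", "copied", "copies"]])  # all map to 'copy'
print(lemmatize("came"))                                   # 'come'
```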
These are a few basic techniques used in NLP. I hope I've clarified some of the basic and important concepts, as these are the building blocks of many other NLP concepts. To learn more about NLTK, visit this link.
Thanks for reading! ❤
Follow for more updates!
| A dive into Natural Language Processing | 205 | a-dive-into-natural-language-processing-103ae9b0a588 | 2018-06-18 | 2018-06-18 11:55:42 | https://medium.com/s/story/a-dive-into-natural-language-processing-103ae9b0a588 | false | 615 | GA DS | null | GreyAtomSchool | null | GreyAtom | greyatom | null | GreyAtom_School | Machine Learning | machine-learning | Machine Learning | 51,320 | Jocelyn D'Souza | Data Scientist | Machine Learning | Artificial Intelligence | 71a366ceb3de | djocz | 217 | 12 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-24 | 2018-08-24 05:42:48 | 2018-09-04 | 2018-09-04 16:08:49 | 2 | false | en | 2018-09-05 | 2018-09-05 05:31:43 | 0 | 103e355f9dc3 | 1.704088 | 0 | 0 | 0 | How Can We Test Multiple Hypotheses? | 5 | Multiple Testing
How Can We Test Multiple Hypotheses?
Let’s say we have a set of hypotheses that we want to test at the same time. Our first thought might be to test each hypothesis separately, using some level of significance α. Sounds like a decent enough idea.
But let’s consider a case where we have 15 hypotheses to test, and a significance level of 0.05. What’s the probability of observing at least one significant result just due to chance?
P(at least one significant result) = 1 − P(no significant results) = 1 − (1 − 0.05)**15 ≈ 0.54.
So, with 15 tests being considered, we have about a 54% chance of observing at least one significant result, even if none of the tests is actually significant. That's going to be a problem if we have many hypotheses to test. So how can we test multiple hypotheses without inflating our probability of observing a significant result just due to chance?
Bonferroni Correction
The Bonferroni correction is a method for correcting for this phenomenon: it sets the significance cut-off at α/n, where n is the number of tests. In our previous example, with 15 tests and α = 0.05, we'd only reject a null hypothesis if its p-value is less than 0.003333. If we now calculate the chance of observing a significant result by chance, we get P(at least one significant result) = 1 − P(no significant results) = 1 − (1 − 0.003333)**15 ≈ 0.04885. This is much closer to our desired level of 0.05; it's even a bit under, so we are being conservative here.
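The arithmetic above is easy to verify directly (a short sketch; the numbers mirror the example of 15 tests at α = 0.05):

```python
def familywise_error(per_test_alpha, n_tests):
    """P(at least one false positive) = 1 - P(no false positives),
    assuming the n tests are independent."""
    return 1 - (1 - per_test_alpha) ** n_tests

alpha, n = 0.05, 15

uncorrected = familywise_error(alpha, n)     # ~0.54: far above alpha
bonferroni = familywise_error(alpha / n, n)  # ~0.049: back under alpha

print(round(uncorrected, 4), round(bonferroni, 4))
```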
P-Hacking
Failing to use the Bonferroni correction is a type of p-hacking.
P-hacking is the conscious or subconscious manipulation of data in a way that produces a desired p-value, typically in the form of obtaining a significant result that is not actually significant. Assuming that we are honest researchers we want to avoid p-hacking when we are performing analysis so that we don’t come to erroneous conclusions. As the saying goes, torture your data long enough and it will confess.
| Multiple Testing | 0 | multiple-testing-103e355f9dc3 | 2018-09-05 | 2018-09-05 05:31:43 | https://medium.com/s/story/multiple-testing-103e355f9dc3 | false | 350 | null | null | null | null | null | null | null | null | null | Product Management | product-management | Product Management | 25,668 | Alex Harlan | A used to be math major, doing things with data. alexforrest.github.io | 2f98555630cc | alexfharlan | 7 | 44 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-07 | 2017-10-07 16:14:09 | 2017-10-11 | 2017-10-11 15:30:43 | 1 | false | en | 2017-10-15 | 2017-10-15 20:33:14 | 0 | 103fe5cbb962 | 3.29434 | 6 | 1 | 0 | For people who like to read and philosophize about how new technologies such as A.I. and A.R. will play out in the (near) future. | 5 | Epic A.I. mindfucks in Blade Runner 2049
Blade Runner K (Ryan Gosling) together with his A.I. girlfriend Joi (Ana de Armas)
For people who like to read and philosophize about how new technologies such as A.I. and A.R. will play out in the (near) future.
First off, let me tell you that watching the movie was a very cool experience. One could easily go to the theatre for the pretty pictures alone, but for me the movie had much more to offer. Of course there is plenty of cool post-apocalyptic sci-fi going on, yet some details regarding future technologies, like voice-controlled UI for instance, were very well thought through.
Having said that, in this short article I will focus on the relationship between the main character, Blade Runner K, and his hyper-realistic A.I. girlfriend Joi. Why? Because I think the writers explored some interesting concepts regarding the development of a more human-like A.I.
I will try not to spoil anything significant about the plot. However, I could not write this article without referring to a few scenes.
Joi is a beautiful and highly intelligent A.I. hologram that can move around the living room using a projector arm that moves across the ceiling. At first her limitations make it really hard for the viewer to regard her as a living being, and thus to feel any kind of empathy for her. The Blade Runner, on the contrary, is made of flesh and blood. Being an outcast in the world means he gets lonely now and then, and therefore he seeks comfort in A.I. This, of course, is something we can easily get and empathize with.
Then one day the Blade Runner does a ‘good job at the office’ and he gets a bonus. The first thing he does is to buy an upgrade for his virtual lover. This gadget (a portable hologram usb thingy) allows him to take Joi out of the home so they can move about freely much like normal couples do.
Now the freaky mindfuck part for me was that Joi genuinely was really excited about getting her own update. She cried tears of joy because she and her boyfriend were about to finally be set free. What this did for me as a viewer (and human) was that I could immediately empathize with her. In other words, she became so much more real. Sure, this behavior could very well be programmed, or machine learned if you will, but still it created a weird sense of existence… an A.I. with an urge for survival and a strong desire to get the most out of life. It's a powerful perception. And in the case of the attractive Joi, I would gladly be fooled by it.
What I find interesting is that my reaction was probably intended by the writers; designed to evoke more empathy and a sense of bonding with the character of Joi. But when you think about it the same concept could very well be used in actual A.I. design. Pretty mind blowing.
Besides the above, the writers found a few more clever concepts to further blur the line of what is real, both on a physical and an emotional level.
Below are two more examples that have remained to me:
Because Joi is basically a hologram, she has no substance, which of course means she and K cannot touch, let alone cuddle. Because she really wants K to be able to connect with her in a more physical way, she gets the crazy idea to map herself over a real woman: a volunteer who is willing to play the part. The scene becomes very intriguing, with the real woman trying to follow the exact movements of Joi. I find this a highly creative solution, and again not so far-fetched looking at the future…
During the movie K obviously gets into some kind of trouble, and as a result he has to flee home. Naturally he wants to take Joi with him, but at the same time it would be dangerous to leave a backup of her on the home server, because then the bad guys could hack Joi and uncover all their secrets. Together they decide to delete Joi entirely, leaving only one copy of her existence on the portable hologram stick. This means that if the stick gets lost or broken, Joi dies. A painful realization. This concept of mortality was another great way for the writers to add the feeling of a more precious and seemingly real connection.
This concludes my very first article on Medium :-) Hope you enjoyed it.
I like how movies like Blade Runner 2049 inspire us, and maybe even give us a tiny glimpse of the-littlebit-scary-but-freakinawesome-future.
Clap if you like, and let me know what you think!
Tim
| Epic A.I. mindfucks in Blade Runner 2049 | 6 | epic-a-i-mindfucks-in-blade-runner-2049-103fe5cbb962 | 2018-04-24 | 2018-04-24 00:08:15 | https://medium.com/s/story/epic-a-i-mindfucks-in-blade-runner-2049-103fe5cbb962 | false | 820 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Tim Aarts | Creative | Art Director at Blossom. Forever curious. | e8f5ef44652d | timaarts | 43 | 145 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 74876b5b4d3b | 2018-01-24 | 2018-01-24 09:10:49 | 2018-01-24 | 2018-01-24 09:51:15 | 5 | false | en | 2018-01-24 | 2018-01-24 09:51:15 | 5 | 103fe732adc6 | 3.912579 | 25 | 2 | 0 | Rest APIs play a crucial role in the exchange of data between internal systems of an enterprise, or when connecting with external services. | 5 | CherryPy vs Sanic: Which Python API Framework is Faster?
Rest APIs play a crucial role in the exchange of data between internal systems of an enterprise, or when connecting with external services.
When an organization relies on APIs to deliver a service to its clients, the APIs’ performance is crucial, and can make or break the success of the service. It is, therefore, essential to consider and choose an appropriate API framework during the design phase of development. Benefits of choosing the right API framework include the ability to deploy applications at scale, ensuring agility of performance, and future-proofing front-end technologies.
At DataWeave, we provide Competitive Intelligence as a Service to retailers and consumer brands by aggregating Web data at scale and distilling them to produce actionable competitive insights. To this end, our proprietary data aggregation and analysis platform captures and compiles over a hundred million data points from the Web each day. Sure enough, our platform relies on APIs to deliver data and insights to our customers, as well as for communication between internal subsystems.
Some Python REST API frameworks we use are:
Tornado — which supports asynchronous requests
CherryPy — which is multi-threaded
Flask-Gunicorn — which enables easy worker management
It is essential to evaluate API frameworks depending on the demands of your tech platforms and your objectives. At DataWeave, we assess them based on their speed and their ability to support high concurrency. So far, we’ve been using CherryPy, a widely used framework, which has served us well.
CherryPy
An easy to use API framework, Cherrypy does not require complex customizations, runs out of the box, and supports concurrency. At DataWeave, we rely on CherryPy to access configurations, serve data to and from different datastores, and deliver customized insights to our customers. So far, this framework has displayed very impressive performance.
However, a couple of months ago, we were in the process of migrating to Python 3 (from Python 2), opening the door to a new API framework written exclusively for Python 3: Sanic.
Sanic
Sanic uses uvloop, an event loop built on libuv, and hence is a good contender for being fast.
(Libuv is an asynchronous I/O library; one of the reasons for its agility is its ability to handle asynchronous events through callbacks. More info on libuv can be found here.)
In fact, Sanic is reported to be one of the fastest Python API frameworks available today, and it uses the same event-handling library as Node.js, which is known to serve fast APIs. More information on Sanic can be found here.
So we asked ourselves, should we move from CherryPy to Sanic?
Before jumping on the hype bandwagon, we looked to first benchmark Sanic against CherryPy.
CherryPy vs Sanic
Objective
Benchmark CherryPy and Sanic to process 500 concurrent requests, at a rate of 3500 requests per second.
Test Setup
Machine configuration: 4 VCPUs/ 8GB RAM.
Network Cloud: GCE
Number of Cherrypy/Sanic APIs: 3 (inserting data into 3 topics of a Kafka cluster)
Testing tool : apache benchmarking (ab)
Payload size: All requests are POST requests with 2.1KB of payload.
API Details
Sanic: In Async mode
Cherrypy: 10 concurrent threads in each API — a total of 30 concurrent threads
Concurrency: Tested APIs at various concurrency levels. The concurrency varied between 10 and 500
Number of requests: 100,000
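For reference, an apache benchmarking invocation matching this setup looks like the following; the host, port, and endpoint are placeholders, `-p` supplies the ~2.1KB POST payload file, and `-T` sets its content type (`-c` was varied between 10 and 500 across runs):

```shell
ab -n 100000 -c 500 -p payload.json -T 'application/json' http://<host>:<port>/<endpoint>
```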
Results
Requests Completion: A lower mean and a lower spread indicate better performance
Observation
When the concurrency is as low as 10, there is not much difference between the performance of the two API frameworks. However, as the concurrency increases, Sanic’s performance becomes more predictable, and the API framework functions with lower response times.
Requests / Second: Higher values indicate faster performance
Sanic clearly achieves higher requests/second because:
Sanic is running in Async mode
The mean response time for Sanic is much lower, compared to CherryPy
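The gap is mostly about how each model waits on I/O: a blocking handler ties up its worker for the full duration of each backend call, while an async handler yields to the event loop. A stdlib-only simulation of that effect (the 10 ms sleep is a stand-in for a Kafka write; CherryPy's thread pool softens, but does not remove, the blocking cost):

```python
import asyncio
import time

REQUESTS, IO_DELAY = 50, 0.01  # 50 in-flight requests, ~10 ms of I/O each

def blocking_handler():
    time.sleep(IO_DELAY)           # a blocked worker does nothing else

async def async_handler():
    await asyncio.sleep(IO_DELAY)  # the event loop serves others meanwhile

# Blocking worker, one request at a time: total time grows linearly.
start = time.perf_counter()
for _ in range(REQUESTS):
    blocking_handler()
blocking_time = time.perf_counter() - start

# Async: all requests overlap, so total time is close to one I/O delay.
async def main():
    await asyncio.gather(*(async_handler() for _ in range(REQUESTS)))

start = time.perf_counter()
asyncio.run(main())
async_time = time.perf_counter() - start

print(f"blocking: {blocking_time:.3f}s  async: {async_time:.3f}s")
```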
Failures: Lower values indicate better reliability
The number of non-2xx responses increased for CherryPy with increasing concurrency. In contrast, the number of failed requests in Sanic remained below 10, even at high concurrency values.
Conclusion
Sanic clearly outperformed CherryPy, and was much faster, while supporting higher concurrency and requests per second, and displaying significantly lower failure rates.
Following these results, we transitioned to Sanic for ingesting high-volume data into our datastores, and started seeing much faster and more reliable performance. We now aggregate much larger volumes of data from the Web, at faster rates.
Of course, as mentioned earlier in the article, it is important to evaluate your API framework based on the nuances of your setup and its relevant objectives. In our setup, Sanic definitely seems to perform better than CherryPy.
What do you think? Let me know your thoughts in the comments section below.
If you’re curious to know more about DataWeave’s technology platform, check out our website, and if you wish to join our team, check out our jobs page!
| CherryPy vs Sanic: Which Python API Framework is Faster? | 226 | cherrypy-vs-sanic-which-python-api-framework-is-faster-103fe732adc6 | 2018-06-20 | 2018-06-20 03:45:51 | https://medium.com/s/story/cherrypy-vs-sanic-which-python-api-framework-is-faster-103fe732adc6 | false | 816 | We aggregate noisy public data on the Web and transform it into actionable insights for businesses. | null | DataWeave | null | DataWeave | dataweave | PRICING STRATEGY,PRICING SOLUTIONS,DATASCIENCE | dataweavein | Programming | programming | Programming | 80,554 | Rahul Ramesh | Technical architect at DataWeave | 9beaf7eb20d6 | rr.iiitb | 10 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-03 | 2018-04-03 06:23:29 | 2018-04-03 | 2018-04-03 06:34:01 | 2 | false | en | 2018-04-03 | 2018-04-03 06:34:01 | 1 | 1040357ffd2 | 3.556918 | 1 | 0 | 0 | Imagine a person who loves technology. A person who is willing to help others with computer related problems even though they want it or… | 1 | My quest to become a Data Scientist! Phase: Am I good enough?
Photo by Lacie Slezak on Unsplash
Imagine a person who loves technology. A person who is willing to help others with computer-related problems whether they want it or not. A person who is considered the computer guy in college. A person who has wasted a lot of his time playing video games. A person who hated mathematics. A person who doesn't like to follow the academic curriculum and is only interested in new, trending subjects. A person who is not that good at speaking English.
2017 is coming to an end. A new interest is forming inside my brain. Data Science… hm… Machine Learning… hm… Artificial Intelligence!!! Wow, I can become the next Tony Stark and create Jarvis. How cool is that? Although Iron Man had a significant impact on my choosing this field, that may not be the whole reason for pursuing Data Science. I have realised that the era is changing and the future is all about data. It would be better to take that path right now rather than later, or I may need to catch up with a lot of things. So that's it, I am going to become a Data Scientist! I have decided.
WHAT ARE THE TOPICS THAT I NEED TO LEARN?
PROGRAMMING
Well, I have completed a Bachelor of Computer Applications, I have experience in coding things out, and I am comfortable solving the problems I face while programming. So I am sure this topic will not worry me. Even though I never tried Python, all I needed to do was learn its syntax; most programming concepts are the same as in other programming languages, so I don't need to learn from scratch.
LINEAR ALGEBRA AND CALCULUS
Ah! Crap! All these years I hated mathematics, and now I need to love it? How can you love a topic that you hate? I don't even remember algebra except (a + b)². Now I need to learn it all from scratch.
READER: YOU GOTTA BE KIDDING ME!
PROBABILITY AND STATISTICS
Probability? Well, I think it's not that hard. It's just finding out how likely an event is to occur, right? I just hope there is nothing more to it. And I don't hate statistics; it's just that I never gave it any importance.
READER: QUIT DATA SCIENCE RIGHT NOW!
OKAY TIME TO START!
I know I have lots of hurdles on my way, but if I overcome them I can create JARVIS. Yes, I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it!
READER: STOP IT! NOW YOU ARE ANNOYING ME!
With that level of motivation, I jumped into the Data Scientist track on dataquest.io. First of all, this website is amazing for those who are new to the world of Data Science. Now, I needed to get a job as soon as possible because of a financial problem in my family. So I checked out the curriculum and decided I would complete the Data Scientist track in one month. I quit all forms of entertainment and started my journey on 21st December 2017. Well, things weren't going smoothly; I hadn't completed even the Python Programming: Beginner course by 1st January 2018. How am I supposed to complete it in one month? But I took a deep breath and started saying: Yes, I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it! I can do it!
READER: NOT AGAIN!
So, with the power of coffee, I started pursuing that track like a madman. And finally I reached 31st January 2018. I hadn't even completed the Data Analyst section. Damn. All right, I will complete this track by February. So I started drinking 3–4 cups of coffee a day and pursued the track like an ultra madman. By God's grace, I completed the Data Scientist track on 1st March 2018. I know, a day late. :(
TIME TO LOOK FOR A JOB
Whaaat??? They expect a beginner data scientist to have all those skills. Nooooooooo! How can I gain that many skills in such a short time? It's impossible. Due to the financial crisis at home, I decided to switch to becoming a Data Analyst by taking a 1-month free trial of LinkedIn Learning. Even though I am easily able to follow the courses, my heart is fully focused on becoming a Data Scientist. So on 20th March, I decided to take on Machine Learning, requesting my parents to give me time until June. And I have started studying Machine Learning on LinkedIn Learning to make proper use of that 1-month free trial. But still, I am kind of frustrated right now. My heart is pondering the question:
AM I GOOD ENOUGH?
| My quest to become a Data Scientist! Phase: Am I good enough? | 5 | my-quest-to-become-a-data-scientist-phase-am-i-good-enough-1040357ffd2 | 2018-04-03 | 2018-04-03 11:58:21 | https://medium.com/s/story/my-quest-to-become-a-data-scientist-phase-am-i-good-enough-1040357ffd2 | false | 841 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Sahil Sunny | I am an aspiring Data Scientist. I am working hard to become a professional Data Scientist. | 41f7bce145a0 | sahilsunny | 5 | 7 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-18 | 2018-05-18 14:28:10 | 2018-05-18 | 2018-05-18 14:35:30 | 0 | false | pt | 2018-05-18 | 2018-05-18 14:37:40 | 0 | 1040e9c29549 | 1.750943 | 0 | 0 | 0 | Dias atrás comecei a pensar muito sobre um assunto e acabei fazendo o link com algo que eu não conhecia. Lembro que a primeira vez que ouvi… | 5 | 3000 anos após cristo
A few days ago I started thinking a lot about a subject and ended up connecting it to something I didn't know. I remember that the first time I heard the word obsolescence was around 2014, in the 4th semester of my advertising degree. Planned obsolescence was something very new at the time, a concept I found fantastic because of the genius of whoever came up with it, but at the same time a pretty lousy concept, because it is not at all pleasant to know that someone programs your printer to perform only x prints before it famously breaks down.
However, I am not going to talk about obsolescence in machines, but in humans, and about how it becomes a very interesting and sad concept at the same time.
It is normal to think and act on the assumption that NOTHING is irreplaceable, nothing at all: your R$5,000 iPhone paid in 36 installments, your state-of-the-art computer, and you. Yes, you reading this text, just like me, should be aware of that.
I skateboard, and I played football for many years. One thing that is evident in these sports is that, like everything in life, they have an end: a skater's and a player's performance declines over time and eventually runs out. Of course there are cases of people who reach 40 with the health of a 17-year-old, but even they have an end.
In my personal opinion, I see human obsolescence not only as being discarded because of the arrival of artificial intelligence, but as a combination of factors: we are discarded by friends, colleagues and even family members when they realize we are no longer useful. After all, you probably have that person who was glued to you at school and spent afternoons chatting away on MSN, and today you don't even know where they are; if you do, you know little, not like you did 10 years ago. In other words, that person became obsolete: you discarded them. They had a useful period in your life, a period that came to an end.
This end that we will all meet one day is certain, but you cannot simply think that way, as if life were only that. Our bodies and our relationships, whether professional or emotional, will always come to an end, and in truth the only thing that will remain is your legacy, your work, everything you did; that will never become obsolete. Now think with me: physics is always reinventing itself. New theories, new tests, new physicists and new frameworks emerge, and nothing from the past becomes obsolete. You may not live forever, but your creations can. I may not live forever, but this text can.
| 3000 anos após cristo | 0 | 3000-anos-após-cristo-1040e9c29549 | 2018-05-18 | 2018-05-18 14:37:41 | https://medium.com/s/story/3000-anos-após-cristo-1040e9c29549 | false | 464 | null | null | null | null | null | null | null | null | null | Humanity | humanity | Humanity | 10,425 | José Henrique | Publicitário, tentando andar de skate, tentando ser fotógrafo e ao mínimo, tentando escrever. | ed47520c0f51 | josehenriqueas7 | 24 | 21 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-19 | 2018-06-19 03:41:54 | 2018-06-19 | 2018-06-19 03:43:32 | 1 | false | en | 2018-06-19 | 2018-06-19 03:44:19 | 7 | 10415de06de2 | 0.316981 | 2 | 0 | 0 | It may sound a little creepy, but it definitely has its uses! #ArtificialIntelligence #AI #TechnologyTrends #DynamicFusion #BodyFusion… | 5 | What’s under those clothes? This system tracks body shapes in real time.
It may sound a little creepy, but it definitely has its uses! #ArtificialIntelligence #AI #TechnologyTrends #DynamicFusion #BodyFusion #DoubleFusion
https://goo.gl/YzN3Cg
| What’s under those clothes? This system tracks body shapes in real time. | 86 | whats-under-those-clothes-this-system-tracks-body-shapes-in-real-time-10415de06de2 | 2018-06-19 | 2018-06-19 03:47:56 | https://medium.com/s/story/whats-under-those-clothes-this-system-tracks-body-shapes-in-real-time-10415de06de2 | false | 31 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Haripriya Lekshmi | #Digitalmarketer #SEO #Payp-per-click #SEM #SMM #SMO | 3c1cccefc1f2 | haripriyalekshmi2017 | 39 | 531 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-02 | 2018-01-02 18:07:02 | 2018-01-02 | 2018-01-02 18:12:51 | 1 | false | en | 2018-01-02 | 2018-01-02 18:12:51 | 0 | 10425063cb30 | 3.022642 | 33 | 0 | 0 | Generally, in any organization, employers feel that employees are not performing and employees feel that employers are not rewarding! | 5 |
How can OBIZCOIN help improve healthy relationships between employers and employees and bring operational excellence to the organization?
Generally, in any organization, employers feel that employees are not performing, and employees feel that employers are not rewarding them!
This happens because of multiple factors like
1. Work allotted is not recorded.
Every task assigned to anyone should be recorded in software. This helps in various ways: it gives you a record to refer to in the future in case of dispute or ambiguity.
It helps you track the tasks allotted, tasks accomplished, tasks incomplete.
2. Work done is not reported / communicated:
It often happens that completed work is not reported properly. Employees who communicate well but perform poorly get their way with managers through sweet talk, while employees who work hard but do not advocate for themselves are written off as laggards and underperformers, which breeds dissatisfaction and office politics.
3. Work allotted is not cross checked:
This has multiple repercussions. Employees stop taking their seniors seriously, because they know that checks will be rare and that even if they do not perform the duty well, they can get away with it.
For honest performers, it is a thankless situation: no matter how hard they try, their efforts are unlikely to be appreciated and their work will probably go unnoticed.
4. No merit consideration while granting appraisals
Because allotted work is not recorded, there is little follow-up, even less cross-verification of the work and its quality, and little reward or appreciation. In such scenarios, it becomes extremely difficult to make fair appraisal decisions. If appraisals are left in the hands of immediate supervisors, they will naturally choose candidates of their personal preference, leaving aside merit, work accomplished, diligence and honesty.
So, as we have seen, for lack of proper allotment, follow-up, cross-verification and reward, the enterprise falls into a perpetual loss of motivation, with everyone in the organization unhappy with each other or with management. This drives up attrition: employees do not stay with the organization for long, which raises the cost and time of recruiting and training replacements and slows the organization's overall progress.
A BOT can help save organizations from the complex problems above by planning and executing operations efficiently with the help of AI and blockchain capabilities.
Business will undergo various stages in planning and executing operations
Process Mapping:
Each and every department will have standard processes to follow in order for smooth uninterrupted operations. Any deviations will be reported and solved as per processes developed.
Procedures development:
Every activity carried out in the organization will have a standard procedure to be followed. This brings consistency in execution and any deviation or change can be mapped and rectified instantly.
KRA’s and KPI’s
KRA stands for Key Responsibility Area. Every employee in the organization is expected to deliver output in their key responsibility areas. The BOT will develop KRAs based on the processes mapped and procedures developed.
Process Audits
Audits will be performed by the BOT, evaluating performance based on the KRAs accomplished and other parameters fed into the system.
Reward System:
Smart contracts will have the responsibilities and rewards mentioned. After the process audit completion by BOT, smart contracts would auto execute rewards based on results provided by the BOT.
When the responsibilities expected out of employees and rewards offered are explicitly locked in smart contracts, there is little room for ambiguity or distrust. This can be achieved with the help of smart BOT. This can lead to healthier relationships between employees and employers. There will be less employee attrition and high productivity.
The BOT will help bring productivity to the organization by streamlining operations and allowing management to focus only on things that are non-repetitive in nature. Standardizing execution helps with scalability, and it is SOPs that bring standardization. Standard Operating Procedures followed rigorously can bring exponential scalability. Large organizations have spent billions on achieving operational excellence. For instance, no matter which McDonald's outlet you visit in the world, the service, product, quality and process time involved remain more or less the same. This comes from achieving excellence in standard operating procedures. OBIZCOIN, with the help of the smart process BOT, will strive to bring operational excellence to startups and SMEs, giving them the competitive edge they have been lacking to compete with the giant corporations.
| How can OBIZCOIN help in improving healthy relationships between Employers and Employees and bring… | 1,572 | how-can-obizcoin-help-in-improving-healthy-relationships-between-employers-and-employees-and-bring-10425063cb30 | 2018-02-21 | 2018-02-21 13:48:00 | https://medium.com/s/story/how-can-obizcoin-help-in-improving-healthy-relationships-between-employers-and-employees-and-bring-10425063cb30 | false | 748 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Varun Shah | Co-Founder at Your Retail Coach and OBIZCOIN | 9ab0cb8539f9 | accessvarun | 45 | 17 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 4df293873d72 | 2017-12-14 | 2017-12-14 09:32:09 | 2017-12-15 | 2017-12-15 09:55:19 | 4 | false | th | 2018-03-21 | 2018-03-21 17:33:37 | 1 | 1042da5cd6ca | 1.277358 | 3 | 0 | 0 | ใช้ pinterest เก๋ๆ ด้วยฟีเจอร์ Visual Search ค้นแบบเก่ามันเชยไป ใช้แบบใหม่ดีกว่าไหม.. AI ก็มี Deep Learning ก็มา ^0^ | 5 | ลองเล่น Visual Search | ฟีเจอร์เก๋ๆ ใน pinterest.com
Use Pinterest in style with the Visual Search feature. Searching the old way is dated; why not try the new way? There's AI, and there's deep learning. ^0^
Three days ago I read an article on five technology trends in Thailand for 2018, and one of the five was Voice & Visual Search. It reminded me of a feature on pinterest.com (Pin + Interest), the site that gathers countless images of ideas and creativity and lets users 'pin' the ones they like to their own boards. A simple concept, but a winning one.
A little background: I have loved craft and handmade work since childhood. As a kid I made cards, knitted, and so on. As I grew up and computers became part of everyday life, I started hunting for ideas on the internet and found Pinterest back before it was as widely popular as it is today, so I have watched it improve continuously, in both looks and features. It used to offer only pinning and a red heart; now you can share through several social media channels.
But what's even cooler than that is "Visual Search"! Normally we search by typing a keyword and clicking the magnifying-glass icon to see the results, and our lovely Pinterest then suggests categories related to what we searched for as well. How could you not fall for something this charming? (=^_^=)
Where is our hero, "Visual Search"?
When you open an image you are interested in, you will see a small white square icon resembling a camera viewfinder. That's it: Visual Search, our real hero. Go ahead and click it.
One click runs a search by image and shows initial results, and the original image on the left gets a white frame you can adjust to focus the search on a particular object in the picture. As soon as you release the mouse, the system searches again automatically, no search button required, and returns results in real time as you move the viewfinder. Let's try it! Focus on the white Christmas tree...
Ta-da! The results on the right closely match the original, in both color and shape. Smart, isn't it?
Actually, Pinterest released Visual Search for people to try back in late 2015, but I only noticed the odd-looking icon stuck on images last year. At first I clicked it in confusion, wondering what it was and how to use it; after playing around I realized it is search by image, or Visual Search, which is really an AI that uses deep learning algorithms.
This feature will surely be adopted in more and more systems. Something to watch, play with and keep trying, or, even cooler, to plug into a system of your own.
| ลองเล่น Visual Search | ฟีเจอร์เก๋ๆ ใน pinterest.com | 3 | ลองเล่น-visual-search-ฟีเจอร์เก๋ๆ-ใน-pinterest-com-1042da5cd6ca | 2018-10-03 | 2018-10-03 12:55:19 | https://medium.com/s/story/ลองเล่น-visual-search-ฟีเจอร์เก๋ๆ-ใน-pinterest-com-1042da5cd6ca | false | 153 | Programming, Technology, WorkLife, Storyteller, Lifestyle | null | thipwriteblog | null | thipwriteblog | null | thipwriteblog | PROGRAMMING,TECHNOLOGY,STORYTELLER,LIFESTYLE,WORK | thipwriteblog | Pinterest | pinterest | Pinterest | 3,719 | thip | ฝากติดตามเรื่องเล่าสั้นๆที่ Facebook Page : thip อีกหนึ่งช่องทางด้วยนะคะ : ) | 45771fa6a742 | thipz | 194 | 31 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-14 | 2018-01-14 12:43:30 | 2018-02-02 | 2018-02-02 14:22:21 | 8 | false | en | 2018-02-02 | 2018-02-02 15:01:54 | 6 | 1043ce5b9203 | 5.865409 | 8 | 0 | 0 | Apple open sourced their machine learning API called Turi Create recently which is a high level API for Machine Learning and Deep Learning… | 5 | Activity Monitoring with Apple’s Turi Create | Machine Learning
Apple recently open-sourced Turi Create, a high-level API for machine learning and deep learning. Turi Create is one of the simplest ways to get your models up and running in next to no time. I have been experimenting with this library, and it delivers just what it promises. Turi Create is targeted at developers who do not necessarily have expertise in data science, so it gives you only limited control over fine-tuning your models. Given Apple's expertise behind the tool (originally acquired with GraphLab Create), it does a great job of identifying your requirements and producing a model that is ready for production. On the project's GitHub you can find examples that run deep learning models in as few as five lines of code.
In this example we shall be creating an Activity Monitoring Machine Learning model which takes Accelerometer and Gyroscope data from smartphone/smartwatches and classifies the data as Walking, Sleeping, etc. We shall see a summary of the internal working of this model after we code it up.
Data
For this exercise, we shall be using the Smartphone-Based Recognition of Human Activities and Postural Transitions Data Set from the UCI Machine Learning Repository. Simply download the zip folder HAPT Data Set.zip and extract it on your machine. If you are doing this on Google Colaboratory (which I have come to love recently) you can follow the following code. If you are doing this on the classic Jupyter Notebook or Spyder or others, skip this step and move to the next sFrame section of the code.
For Colaboratory, you need to upload and unzip your data. If you have followed my previous articles, it is pretty straightforward: upload the file to the Colaboratory virtual machine; since it is loaded in RAM, write it to disk; then unzip it.
You can check to see if your files have been uploaded and extracted by using the ls command directly in the notebook cell.
Notice the HAPTdataset Directory is present
Before Proceeding, I recommend you read the Readme.txt file in the dataset folder to get a detailed understanding of the data (when you code this yourself). This step is the most crucial in your data science project.
Moving on…
Now we get Turi Create installed and good to go on our machine. !pip install -U turicreate (without the ‘!’ in your local machine terminal)
sFrame
sFrame is a data structure used by Turi Create to store Datasets. It is similar to a Dataframe but it is not constrained by the RAM of the machine running the code. This makes it a scalable data structure which can be used in big data as well. It is column immutable and supports out-of-core processing.
What do sFrames make simple? sFrames do not need to be loaded as a whole in the RAM when algorithms are run on the dataset. This minimizes the resources required for memory. The sFrame (as you shall see) is stored on your disk and picked up for processing when the operations are to be run.
This dataset contains text files of data from accelerometer and gyroscope from a smartphone/smartwatch in the Raw Data folder. Also in this folder is a labels.txt file which includes the labels. Note that the description of the labels is present in the README.txt which we shall refer while setting up the labels sframe.
After importing the turicreate library, we pull the labels from labels.txt (prefixed by the path data_dir), using a space as the delimiter. Since the labels file has no header, we define the column names ourselves by referring to README.txt, then display the labels to check.
Now to load the training data. Before we load the data (for some structure) we will define a simple function to find label for the intervals we pick up from the data.
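As a rough illustration (not the article's exact code), such a helper can scan the labelled intervals and return the activity that covers a given sample index; the tuple layout below is an assumption made for this sketch:

```python
def find_label(intervals, sample_index):
    """Return the activity label whose [start, end] interval contains
    sample_index, or None if the sample falls in an unlabeled gap.

    intervals: list of (start, end, activity) tuples for one experiment,
    taken from the labels sFrame.
    """
    for start, end, activity in intervals:
        if start <= sample_index <= end:
            return activity
    return None
```

With a lookup like this, every row of raw sensor data can be stamped with the activity that was being performed when it was recorded.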
Get a mental image of your dataset before we begin to build it. In our sFrame, we need 3 axes values of the Accelerometer, 3 axes values of the gyroscope, exp_id, user_id and finally labels. Column names: exp_id — user id — acc_x — acc_y — acc_z — gyro_x — gyro_y — gyro_z — activity (labels)
Now we move on to rounding up our data from the various files and pushing it into a single sFrame. Notice how the data sits in several text files; to pull it out, we will use the glob library, which makes it very handy to handle files that follow a fixed naming convention. Here our Raw Data directory contains the 3-axis accelerometer data in acc_*.txt files, while the 3-axis gyroscope data is in gyro_*.txt files. We first use glob to list these files. With tc.SFrame, we create an SFrame to fill the data into, followed by a simple for loop that picks up the user_id and exp_id from the filenames in the Raw Data directory. Within that loop we fill in the accelerometer and gyroscope columns with the right column names.
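In the HAPT archive the raw files follow the pattern acc_expXX_userYY.txt and gyro_expXX_userYY.txt, so both ids can be pulled straight from the filename. A minimal sketch of that parsing step (the helper name is mine):

```python
import os
import re


def parse_exp_user(path):
    """Extract (exp_id, user_id) from a Raw Data filename such as
    'acc_exp01_user01.txt' or 'gyro_exp12_user06.txt'."""
    name = os.path.basename(path)
    m = re.match(r'(?:acc|gyro)_exp(\d+)_user(\d+)\.txt$', name)
    if m is None:
        raise ValueError('unexpected filename: %r' % name)
    return int(m.group(1)), int(m.group(2))
```

Inside the loop, the returned pair becomes the exp_id and user_id columns for every row read from that file.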
Most of our data is ready here. We now need to create the target labels. We already have the activity_id column in our dataset, which we shall replace with the actual activity names, since Turi handles all sorts of categorical data on its own and it is very handy to get predictions in a readable format when we need them.
We create a target map for the activities in the data, then use the keys of the activity_id column to generate a new column containing the activity names. We remove activity_id since we don't need it anymore, and save the data.
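A sketch of such a target map, using the six basic activity names from the dataset's README (the postural-transition ids, 7 through 12, are left out of this sketch):

```python
# Basic activity ids -> readable names, per the HAPT activity labels.
target_map = {
    1: 'WALKING',
    2: 'WALKING_UPSTAIRS',
    3: 'WALKING_DOWNSTAIRS',
    4: 'SITTING',
    5: 'STANDING',
    6: 'LAYING',
}


def activity_name(activity_id):
    """Map a numeric activity_id to its readable name."""
    return target_map.get(activity_id, 'UNKNOWN')
```

Applying this mapping over the activity_id column yields the readable activity column that the classifier will use as its target.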
Model
Getting to the interesting and final part of the project. First we create train and test sets, specifying the dataset (data), the session_id column (which identifies the recording session each observation belongs to) and the size of the training set: 80%.
Turi has a built-in toolkit for activity recognition called the activity classifier. To build this classifier we need to specify certain parameters. The first is the dataset, train. Then session_id, the identifier for each recording session in the dataset. Next, the target column. Finally, prediction_window, which is set to 50 since our samples were recorded at 50 Hz (as noted in README.txt). There are other parameters you might need to set, for which you should check the documentation.
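To see what prediction_window=50 means in practice: the classifier emits one prediction per window of 50 consecutive sensor rows, i.e. one second of data at 50 Hz. A pure-Python illustration of that grouping (not Turi Create's internal code):

```python
def prediction_windows(samples, window=50):
    """Group consecutive sensor samples into fixed-size windows;
    an activity classifier produces one prediction per window."""
    return [samples[i:i + window] for i in range(0, len(samples), window)]
```

So a 3-second slice of 150 rows at 50 Hz yields three windows, and hence three predictions.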
Watch the model train.
Evaluating the model is pretty straight forward. We evaluate the model on the test data we have kept aside. Our Model shows 69.90% accuracy which is not bad.
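The 69.90% figure is plain classification accuracy: the fraction of evaluation windows whose predicted activity matches the true label. Conceptually:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)
```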
Saving the model. For Turi we save the model as a .model file. And for those working in iOS development, you can export the model to Core ML in the .mlmodel format.
Making predictions.
We take the walking instances from the dataset to create walking_3_sec in the range 1000:1150. We pass this as a parameter to the model.predict method to check if our model predicts them right.
Congratulations. You have successfully developed your activity monitor with Turi Create.
Please check Turi Create’s documentation on github to find out ways to tune your models. I shall be posting more tutorials about this API. Suggestions and corrections are welcome.
Find me on Facebook, Twitter and LinkedIn. I love to have conversations about Machine Learning.
(Edit: This article is still being updated and revised)
| Activity Monitoring with Apple’s Turi Create | Machine Learning | 109 | activity-monitoring-with-apples-turi-create-machine-learning-1043ce5b9203 | 2018-05-03 | 2018-05-03 21:25:46 | https://medium.com/s/story/activity-monitoring-with-apples-turi-create-machine-learning-1043ce5b9203 | false | 1,254 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Sagar Howal | Electrical Engineer. Data Science Enthusiast. Musician. Nerd. | 96ceaf04c572 | howal | 454 | 91 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 4292d921805f | 2018-04-18 | 2018-04-18 04:19:04 | 2018-04-18 | 2018-04-18 04:22:11 | 3 | false | zh-Hant | 2018-04-18 | 2018-04-18 04:24:36 | 4 | 1045fc0d1408 | 0.799057 | 3 | 0 | 0 | 作者 / 譚竹雯 | 5 | Women in Data Science (WiDS)系列工作坊《用TensorFlow玩Style Transfer》:自己動手做風格轉換
Author / 譚竹雯 (Chuwen Tan)
The third session of the Women in Data Science (WiDS) workshop series, "Playing with Style Transfer in TensorFlow", focused on using TensorFlow and online resources, with speaker Alicia leading participants through hands-on creation of style transfer works.
(Figure 1: a style transfer work fusing Japanese ukiyo-e art with downtown skyscrapers. Image source: https://github.com/random-forests/WTM)
Style transfer in everyday life
Imagine: could a Japanese ukiyo-e painting be combined with downtown skyscrapers?
Take Figure 1 as an example. If you want to apply the brushwork and tones of ukiyo-e to a photo of skyscrapers, you could try photo-editing software or filters. But a filter that merely swaps colors, textures and brightness feels flat; to achieve the effect of the lower image in Figure 1, you need style transfer.
Behind style transfer, the computer uses machine learning algorithms to take a given photo, extract its content, extract the style of the artist's painting, and combine the two.
How deep neural networks work
In practice, the computer separates the photo's content from the painting's style through computations in deep neural networks.
Deep neural networks operate much like the human brain, which contains individual neurons arranged in layers; through these networks we recognize speech and images and carry out all kinds of everyday behavior.
The video "What Does A.I. Have To Do With This Selfie?" (https://youtu.be/WHmp26bh0tI) explains that when we see a picture of a puppy, the lowest layer of neurons first captures the rough outline of the object, and the next layer captures its shape and appearance. The higher the layer, the more finely we can identify the object, until we realize a puppy is sitting in front of us.
In style transfer, engineers apply this deep neural network mechanism to images: different layers of neurons extract the photo's content elements and the painting's style elements separately, then combine the two.
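In the common formulation (Gatys et al.'s neural style transfer), a layer's "style" is summarized by a Gram matrix: the pairwise correlations between that layer's feature channels. A tiny pure-Python sketch of the idea, with each channel flattened to a list of activations:

```python
def gram_matrix(channels):
    """Pairwise inner products between feature channels.

    channels: list of feature maps, each flattened to a list of floats.
    G[i][j] measures how strongly channels i and j co-activate, which
    is the usual summary of a layer's style.
    """
    n = len(channels)
    return [[sum(a * b for a, b in zip(channels[i], channels[j]))
             for j in range(n)]
            for i in range(n)]
```

Style transfer then optimizes an image so that its Gram matrices match the painting's while its raw feature activations match the photo's.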
Do style transfer yourself
In the second half of the event, the speaker led participants in trying out style transfer effects (Figure 2). As toolkits develop and open-source resources spread, style transfer should become more and more accessible. To learn more and build your own effects, see the following resources:
➔ GitHub Reference
https://github.com/random-forests/WTM
https://github.com/lengstrom/fast-style-transfer
➔ Use Docker Locally (only tested on mac)
https://docs.google.com/presentation/d/15I6QjUCxPKmOe6MsfPUVa4zyLlQDwtnCa03q-AHhRa0/edit#slide=id.g3511194302_0_927
(Figure 2: a style transfer work by participant Nelly Chang)
| Women in Data Science (WiDS)系列工作坊《用TensorFlow玩Style Transfer》:自己動手做風格轉換 | 14 | style-transfer-with-tensorflow-1045fc0d1408 | 2018-04-21 | 2018-04-21 08:31:32 | https://medium.com/s/story/style-transfer-with-tensorflow-1045fc0d1408 | false | 66 | WiDS Taipei aims to inspire and educate data scientists worldwide, regardless of gender, and support women in the field. | null | WiDSTaipei | null | Women in Data Science Taipei | women-in-data-science-taipei | DATA SCIENCE,MACHINE LEARNING,EDUCATION | null | Events | events | Events | 25,821 | 譚竹雯 Chuwen Tan | 藉由文字的持續書寫,記錄所看所思,並期許能夠推動一點社會的改變,過無悔的人生。 | 9da16953f947 | chuwen_startup | 20 | 3 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-01 | 2017-09-01 10:20:19 | 2017-09-01 | 2017-09-01 10:21:09 | 1 | false | en | 2017-09-01 | 2017-09-01 10:21:09 | 1 | 1046653d2a1 | 0.464151 | 1 | 0 | 0 | Zeynep Akata, Assistant Professor at the University of Amsterdam will be presenting at the Deep Learning Summit in London. Zeynep will be… | 1 | Meet Zeynep Akata
Zeynep Akata, Assistant Professor at the University of Amsterdam will be presenting at the Deep Learning Summit in London. Zeynep will be discussing “Discovering and synthesizing novel concepts with minimal supervision”. Zeynep received Lise-Meitner Award for Excellent Women in Computer Science from Max Planck Society in 2014. Join Zeynep in London now as tickets are limited, by signing here now: https://re-work.co/events/deep-learning-summit-london-2017
| Meet Zeynep Akata | 1 | meet-zeynep-akata-1046653d2a1 | 2017-12-03 | 2017-12-03 20:32:46 | https://medium.com/s/story/meet-zeynep-akata-1046653d2a1 | false | 70 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | RE•WORK | Applying emerging technology & science to solve challenges in business and society. Deep Learning, Machine Intelligence & more! https://www.re-work.co/ | 3ae910353b87 | teamrework | 3,032 | 1,075 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-17 | 2017-11-17 14:23:28 | 2017-11-17 | 2017-11-17 18:16:06 | 1 | false | en | 2017-11-19 | 2017-11-19 18:43:49 | 20 | 10487c7b4b9f | 4.490566 | 3 | 0 | 0 | Artificial intelligence, machine learning and algorithmic decision-making are rapidly changing the web, our culture and the way we work … | 5 | More Than Robots
People are more than robots. The future can be more than robots. Image: ByronV2
Artificial intelligence, machine learning and algorithmic decision-making are rapidly changing the web, our culture and the way we work. Whilst the implications are huge, the rise of the robots has been less dramatic than sci-fi blockbusters led us to believe.
So far, the robot takeover has been incremental, invisible and generally quite banal — driverless cars and slaughterbots aside. Search is smarter, social is better connected and complex data is sorted quicker. But replicants and R2-D2 haven’t really arrived en masse.
However, the rapid advances in research, the application of technology in more areas of our lives, and the huge investment behind it all, are pushing real AI into the light and the impacts are being felt more keenly. Reading the news today, it feels like the ‘robotic’ future is now much better understood — and arriving very quickly.
Max Tegmark, scientist, author and founder of the Future of Life Institute, makes a compelling case for the need for AI safety research and to clearly articulate the human values we want AI to embody. His assertion is that without both of these in place the robots are very likely to take control — and not in the helpful ways we would like.
From disturbing and offensive algorithmic creations, to exploitation of crises and biased predictive policing we are beginning to see the many consequences of applying machine made decisions to people — and, as in most cases, children are disproportionately affected. It is today's children who will learn, work, participate in and protest against tomorrow's AI future — if they are able to. But it is also children who are most likely to be left out of machine-led decision making right now.
If we acknowledge that data bias and poor design can lead to harmful AI and that children are most likely to be affected by AI, then the question naturally arises: how can children’s rights, views and needs be integrated into AI data, design and accountability?
Many organisations are now stepping up to tackle the complex moral, rights and safety issues that AI brings. In fact, research from the University of Oxford’s Future of Humanity Institute and others identifies AI as the most urgent global issue to tackle.
However, research and initiatives in this area are often shaped around adult perspectives and needs. There seems to be less understanding or focus on children’s rights and how children could play a meaningful role in shaping their future.
If we are careless or unprepared then we will create conditions that reduce people to robots or we will miss the chance to create conditions in which everyone can thrive.
If, on the other hand, we create AI that is wise enough to respect children’s rights, the data we use is not biased against children, and we enable young people to participate in the design, then AI offers many positive opportunities and a future that is much more than robots.
What to do…
How can individuals, schools and organisations nurture learning, improve services and innovate to create more positive, child centred outcomes for AI?
I have been considering some practical ways to help children avoid becoming robots or being ruled by them. I would really welcome your thoughts, better ideas and any examples of good things in action.
Child focussed AI ethics
The Asilomar principles provide a fantastic basis for the ethics and values that should underpin all AI development. Whilst many of the values are universal, I think a specific set of ethics and values for children — more closely aligned to the UN Convention on the Rights of the Child (UNCRC) — would help identify gaps and raise the specific implications of AI on children.
Minimum standards for AI applications
An output of these values may be some practical standards that can be applied in the design, training, deployment and regulation of AI — particularly applications that very directly impact children. For example: pre-emptive health interventions, education assessment and social care decisions.
Similar "safe by design" standards and calls for parity of protection online already exist for the web as it is now. The challenge is updating or creating standards that account for the AI-led world that is emerging.
Digital literacy /digital citizenship
As AI becomes more advanced, the way it works will become more frictionless and the decisions it makes about and for us will become increasingly opaque. Whilst learning to code will not necessarily enable children to create their own AI systems, a good understanding of how algorithms work will help demystify some aspects. The current CS curriculum in England and other initiatives are delivering many of these opportunities and skills. However, technical understanding and competency need to be applied to everyday life. I feel we need to find better ways to support children, parents/carers and teachers to understand and interpret the complex economic, cultural and personal ways emerging technology impacts us and our rights.
Arts and culture
In a future of rational, uber efficient, intelligent machines it will be easy to be out-competed by AI in the workplace — and in life in general.
Perhaps our best defence against the dark arts of the robots is dance… and poetry and comedy and art. If we are to thrive in the future, we should invest in the things AI struggle to do well.
Helping children nurture their empathy, unlock their creativity and have confidence in their silliness are essential parts of education. These talents, that are so undervalued now, may well become highly prized in an increasingly automated world.
What got me thinking…
AI Principles - Future of Life Institute (futureoflife.org)
AI Does Not Have Its Own Intent (jfgagne.ai)
Machine Bias - ProPublica (propublica.org)
Urgent need to 'reconceive schooling' to ensure students can compete with AI (abc.net.au)
Artificial intelligence and the future of human rights (medium.com)
The human use of emotional machines (hackernoon.com)
| More Than Robots | 3 | more-than-robots-10487c7b4b9f | 2018-05-16 | 2018-05-16 15:59:57 | https://medium.com/s/story/more-than-robots-10487c7b4b9f | false | 1,137 | null | null | null | null | null | null | null | null | null | Children | children | Children | 22,434 | cliff manning | Happy skeptic. Interested in how tech impacts real people's lives through education, architecture, government, art & science. Trustee for @sconnections | 531cf67dd716 | cliffmanning | 277 | 372 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-26 | 2017-10-26 19:55:50 | 2017-10-26 | 2017-10-26 19:55:50 | 1 | false | en | 2017-10-27 | 2017-10-27 09:07:44 | 1 | 10489a28832a | 2.566038 | 5 | 0 | 0 | null | 5 | How Machine Learning is Revolutionizing Digital Enterprises
According to the prediction of IDC Futurescapes, two-thirds of Global 2000 enterprise CEOs will center their corporate strategy on digital transformation. A major part of that strategy should include machine learning (ML) solutions, whose implementation could change how these enterprises view customer value and their internal operating models today.
If you want to stay ahead of the game, then you cannot afford to wait for that to happen. Your digital business needs to move towards automation now while ML technology is developing rapidly. Machine learning algorithms learn from huge amounts of structured and unstructured data, e.g. text, images, video, voice, body language, and facial expressions. By that it opens a new dimension for machines with limitless applications from healthcare systems to video games and self-driving cars.
In short, ML will connect intelligently people, business and things. It will enable completely new interaction scenarios between customers and companies and eventually allow a true intelligent enterprise. To realize the applications that are possible due to ML fully, we need to build a modern business environment. However, this will only be achieved, if businesses can understand the distinction between Artificial Intelligence (AI) and Machine Learning (ML).
Machines that could fully replicate or even surpass all human cognitive functions are still the stuff of science fiction; machine learning is the reality behind AI, and it is available today. ML mimics how the human cognitive system functions and solves problems accordingly. It can analyze data at a scale beyond human capability, basing its analysis on the patterns it identifies in big data. It can make UX immersive and efficient while also responding with human-like emotion. By learning from data instead of being programmed explicitly, computers can now deal with challenges previously reserved for humans: they beat us at games like chess, Go and poker, recognize images more accurately, transcribe spoken words more precisely, and can translate over a hundred languages.
In order for us to comprehend the range of applications that will be possible due to ML technology, let us look at some examples available currently:
Both types of devices provide an interactive experience for the users due to Natural Language Processing technology. With ML in the picture, this experience might be taken to new heights, i.e., chatbots. Initially, they will be a part of the apps mentioned above but it is predicted that they could make text and GUI interfaces obsolete!
ML technology does not force the user to learn how it can be operated but adapts itself to the user. It will become much more than give birth to a new interface; it will lead to the formation of enterprise AI.
The limitless ways in which ML can be applied include the provision of fully customized healthcare. It can anticipate a customer's needs from their shopping history, help HR recruit the right candidate for each job without bias, and automate payments in the finance sector.
Business processes will become automated and will evolve with the increasing use of ML, given the benefits associated with it. Customers can use the technology to pick the best results and thus reach decisions faster. As the business environment changes, so will these advanced systems, constantly updating and adapting themselves. ML will also help businesses arrive at innovations and keep growing, by providing the right kind of products and services and basing decisions on the business model with the best outcome.
ML technology is able to develop insights that are beyond human capabilities based on the patterns it derives from Big Data.
Posted on 7wData.be.
| How Machine Learning is Revolutionizing Digital Enterprises | 11 | how-machine-learning-is-revolutionizing-digital-enterprises-10489a28832a | 2018-03-14 | 2018-03-14 16:01:14 | https://medium.com/s/story/how-machine-learning-is-revolutionizing-digital-enterprises-10489a28832a | false | 627 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Yves Mulkers | BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world | 1335786e6357 | YvesMulkers | 17,594 | 8,294 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-04 | 2018-08-04 07:02:22 | 2018-08-04 | 2018-08-04 07:07:56 | 1 | false | en | 2018-08-04 | 2018-08-04 07:07:56 | 2 | 104acb71e36e | 1.464151 | 0 | 0 | 0 | At the New York Summit a couple of days back we propelled two new Amazon SageMaker highlights: another clump deduction include called Batch… | 4 | About Amazon Prime Day Sale
At the New York Summit a few days ago we launched two new Amazon SageMaker features: a new batch inference feature called Batch Transform that enables customers to make predictions in non-real-time scenarios across petabytes of data, and Pipe Input Mode support for TensorFlow containers.
SageMaker remains one of my favorite services and we’ve covered it extensively on this blog and the machine learning blog. In fact, the rapid pace of innovation from the SageMaker team is a bit hard to keep up with.
Since our last post on SageMaker’s Automatic Model Tuning with Hyperparameter Optimization, the team has launched 4 new built-in algorithms and tons of new features. Let’s take a look at the new Batch Transform feature.
Batch Transform
The Batch Transform feature is a high-performance, high-throughput method for transforming data and generating inferences. It’s ideal for scenarios where you’re dealing with large batches of data, don’t need sub-second latency, or need to both preprocess and transform the training data. The best part? You don’t have to write a single additional line of code to make use of this feature.
You can take any of your existing models and start batch transform jobs based on them. This feature is available at no additional charge and you pay only for the underlying resources.
Let’s look at how we would do this for the built-in Object Detection algorithm. I followed the example notebook to train my object detection model. Now I’ll go to the SageMaker console and open the Batch Transform sub-console.
Here I can name my transform job, select which of my models I want to use, and choose the number and type of instances to use. Additionally, I can configure how many records to send for inference concurrently and the size of the payload. If I don’t specify these manually, SageMaker will choose some reasonable defaults.
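The same settings can also be submitted programmatically. Below is a sketch of the request payload that the SageMaker `create_transform_job` API accepts — the job name, model name, S3 paths, and instance choice here are hypothetical placeholders, not values from the post:

```python
# Sketch of a Batch Transform job request, mirroring the console fields.
# All names, buckets, and instance choices below are hypothetical placeholders.
transform_job = {
    "TransformJobName": "object-detection-batch-001",
    "ModelName": "my-object-detection-model",  # an already-trained SageMaker model
    "MaxConcurrentTransforms": 8,              # records sent for inference at once
    "MaxPayloadInMB": 6,                       # size cap per request payload
    "TransformInput": {
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/batch-input/",
        }},
        "ContentType": "image/jpeg",
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/batch-output/"},
    "TransformResources": {"InstanceType": "ml.p2.xlarge", "InstanceCount": 1},
}

# With real AWS credentials this dict would be passed to boto3:
#   boto3.client("sagemaker").create_transform_job(**transform_job)
# Omitting MaxConcurrentTransforms / MaxPayloadInMB lets SageMaker pick defaults.
print(transform_job["TransformJobName"])
```

As in the console, leaving the concurrency and payload fields out of the request lets SageMaker choose reasonable defaults for you.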
| About Amazon Prime Day Sale | 0 | about-amazon-prime-day-sale-104acb71e36e | 2018-08-04 | 2018-08-04 07:07:57 | https://medium.com/s/story/about-amazon-prime-day-sale-104acb71e36e | false | 335 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Amazon Coupons US | null | a088c1a7fdcd | amazoncouponsus | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-13 | 2018-06-13 21:35:22 | 2018-06-14 | 2018-06-14 19:14:40 | 2 | false | en | 2018-07-10 | 2018-07-10 12:56:21 | 6 | 104ce9cd760a | 4.711635 | 17 | 0 | 0 | Supply chain is an interesting term. What is it? It is the chain of events, vendors, logistics, purchases, parts, people, and products that… | 5 | Synthetic Aperture Radar image of the the Port of Shanghai: the busiest port in the world and ground zero for most of the world’s supply chains
Rise of the Living Supply Chain
Supply chain is an interesting term. What is it? It is the chain of events, vendors, logistics, purchases, parts, people, and products that lets a business exist and serve a customer. It’s every action that takes place from production to delivery, all the way to customer satisfaction. Even returns go back through the supply chain.
When we think about making and moving things like this, we come to understand that a supply chain is the physical body that defines a company. You cannot separate a company from its supply chain. The chain is often the single greatest contributor to its success or failure. This was a major theme at the Gartner Supply Chain Executive Conference held in Phoenix last month.
So who oversees these supply chains? A lesser known member of the executive team who often controls a majority of the spending and internal decisions — the Chief Supply Chain Officer (CSCO). These leaders are increasingly being selected as the the successor to the CEO. Tim Cook is the notable example here: he took over as CEO after modernizing the company’s supply chain.
Supply Chained to the Past
In an incredible number of cases, within the Fortune 500, supply chains are stuck in the past. Huge, successful companies dealing with physical goods have grown over many years without viewing the supply chain as a holistic system. As they grew more and more complex, their supply chains became spaghetti systems: difficult to modify, slow to change, and unable to communicate with other parts of the system.
Supply chains are often highly reactive. They’re managed using internal and historical data: factory inputs and outputs, internally derived product deadlines and schedules, suppliers who are chosen based on staff relationships, quality control, price, and historic performance. The list goes on. The common denominator is that the answers come from inside the organization.
Imagine a supplier disruption as an example. Today, if a supplier has a major disruption (wildfire, hurricane, worker strike, etc.), they notify you via email or phone. Your organization and its supply chain react to this news (and put out the other fire of missed expectations with customers, investors, etc.) and then log the disruption into a historical database. When the dust settles, you still only have your internal data relative to that supplier. How can you forecast another disruption with only that?
The Digital Advantage
Amazon has shown us that a supply chain can be used as a distinct competitive advantage. This is the the dawn of a new age of “digital” supply chains. At the core of this digitization movement are cloud based-analytics and the Internet of Things (IoT) which focus on monitoring goods, vehicles and other assets in order to predict changes and model different scenarios. The hope is to evolve the supply chain into a forward-looking thinking machine as opposed to a purely reactive one.
Many firms are beginning to incorporate the concept of a “digital twin.” A piece of machinery or an entire factory can have digital twins. The twin includes detailed information on the asset’s identity, location, usage, output, health and financial history. Supply chain managers can use a twin to forecast impacts on production when something fails. The simulations show where to spend resources (i.e. — perhaps you should you buy a backup for this machine).
Working for Descartes Labs, a company which is using remote sensing technologies (like satellites) and machine intelligence to build a living atlas of the planet, one thing struck me during the conference: companies still tend to look inward when thinking about their supply chains. They monitor their machines, their factories, trucks and ships carrying their goods. They monitor (and even digitize) the objects they have deemed important to their supply chain. But what about the context in which those objects exist?
William James has a great quote about context: “To know an object is to lead to it through a context which the world provides.” Supply chains are increasingly digital but still very internal in nature, and we are quickly gaining the capability to provide complete global context for these supply chains and all their constituent pieces. The time has come to begin merging internal data with the external forces that drive the supply chain.
The Supply Chain of the Future is a Living System
Imagine a world where you have access to all of your digital supply chain data. This is fused with a complete living-picture of the entire planet — the context for where that data lives. “Supply sensing” and its attractive twin, “demand sensing,” are now possible due to advances in localized, remote sensing and machine intelligence.
Why are we guessing what and when to buy? By monitoring the total supply of raw materials on a global scale, the best possible purchasing decision becomes clear.
Take an example in moving a crop — like bananas. Bananas are picked green. Ripening is carefully temperature controlled as they are shipped to consumers all across the world. Any delay in the supply chain can lead to an unsaleable product.
The unique signature of a banana tree lets Descartes Labs see everywhere in the world bananas are grown, even through clouds. This image was captured using Sentinel-1 SAR (synthetic aperture radar) above Costa Rica exposes banana plantations (in yellow).
What if supply chain managers could easily forecast which ports were going to be congested in the near future so alternative modes of transportation or routes could be considered before shipping? Risks associated with choosing an individual supplier’s factory due to location, historic weather patterns, or even factory on/off signals from space-borne sensors could be evaluated. The full story about a supply disruption could be known, well before a panicked phone call from a vendor.
Having an understanding today of who your customers are and where they live is no longer enough. Forecasting where they will be in the future to ensure product or services are delivered at the right place and time will become a competitive decision advantage. Satellite data can forecast the growth of cities on a microscale. With this view, producers can estimate demand for critical infrastructure like roads, houses, and buildings — to be constructed in near-real time.
The commercial enterprise supply chain of the future will be aware of itself and its surroundings. It will get smarter over time by continuously adding new data, learning, and delivering new insights. The benefits of a living supply chain are profound.
So many companies today are trying to simplify and streamline — to reduce complexity anywhere they can. Their supply chains have to ride the wave of digital advances. Winners and losers will be defined not by those that make the best use of their data, but those that make use of all data. Those who understand the context of their decisions will lead.
| Rise of the Living Supply Chain | 47 | rise-of-the-living-supply-chain-104ce9cd760a | 2018-07-10 | 2018-07-10 12:56:21 | https://medium.com/s/story/rise-of-the-living-supply-chain-104ce9cd760a | false | 1,147 | null | null | null | null | null | null | null | null | null | Supply Chain | supply-chain | Supply Chain | 6,262 | James Orsulak | Purveyor of geospatial machine intelligence | The eye of supply | Space-based Agronomist | Asteroid miner | 497f564a6d28 | JamesOrsulak | 15 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1ae21a6d798d | 2017-11-06 | 2017-11-06 17:41:56 | 2017-11-07 | 2017-11-07 15:39:43 | 1 | false | es | 2017-11-09 | 2017-11-09 14:11:50 | 2 | 104d323a1205 | 4.120755 | 4 | 0 | 0 | Nuria Oliver es Directora de Ciencias de Datos en Vodafone, Científica de Datos Principal en Data-Pop Alliance y Asesora Científica del… | 5 |
How to use Big Data for the common good: an interview with Nuria Oliver
Nuria Oliver is Director of Data Science at Vodafone, Chief Data Scientist at Data-Pop Alliance and Scientific Advisor to the Vodafone Institute. Holding a PhD in perceptual intelligence from MIT, Nuria Oliver has more than 20 years of research experience.
Here we want to focus on her work at the Data-Pop Alliance, where researchers, experts and activists come together collaboratively to explore how to achieve a positive impact on society through Big Data.
How is Big Data used for the common good?
There are many ways and many types. Some projects use social media data (such as Foursquare or Twitter) and others use data from the mobile phone network, which is the type of data I have the most experience with. In this case, one of the potentials of the mobile phone is to consider it a sensor of humanity. There are more mobile phones than human beings, and the adoption rate is very high both in developing and developed countries. Since they are always with us and always connected, they help us understand the behavior of entire populations. There are very valuable variables to study, such as estimating the number of people in regions, districts and countries, as well as mobility. Of course, we are always talking about aggregated and anonymized data, preserving privacy.
These variables are valuable in situations such as a natural disaster, where it is important to estimate how many people have been affected, where they are, and whether the disaster has caused displacements. These kinds of questions can be answered from the analysis of aggregated and anonymized mobile network data more precisely than we have managed so far, which has basically been through surveys and observations. They are also useful for understanding the economic development of a region or for automatically detecting a city’s crime hotspots. In this case, activity levels at cell towers help us understand the dynamics of the city, which according to studies is closely related to safety.
Much of Big Data is generated by private companies. What is the path to ensuring that this information is used for the common good rather than for commercial purposes such as real estate, product sales or selling information?
It is an area where a lot of work and debate is taking place in the context of the United Nations and the mobile industry association, the GSMA. Within the GSMA there is an initiative called Mobile Data for Social Good, to which 19 operators belong, including Vodafone, and which is dedicated to carrying out projects with a positive impact on society. In addition, there is Data-Pop Alliance, where I am Chief Data Scientist and collaborate on projects that are 100% dedicated to that goal.
One of them is called OPAL and consists of implementing a platform that allows authorized third parties (such as governments, NGOs or the United Nations) to run algorithms and make queries over private data without the data having to leave the premises where it is stored, always preserving privacy.
In general terms, could you explain what kind of processing is carried out to draw conclusions from so much information? How much data do you handle in each case?
Big Data systems are systems for the distributed processing of huge amounts of data. We use machine learning (developing techniques that allow computers to learn) to analyze this data, which is stored on platforms designed for big data. As for the amount, it really depends on the size of the study, but we could be talking about petabytes.
At Lateral View we develop products related to the Internet of Things, which connect devices to each other. Where do you think we should direct our efforts with the common good in mind?
I believe that for any project it is important to try to understand why we choose it and why it is valued. The IoT world is vast. It can be in hospitals and cities as well as in transport and education. There are many ways to have a positive impact on citizens’ lives in terms of their experience in cities; for example, for elderly people who want to keep living in their homes, Internet-connected products can make that easier while allowing them to be monitored, connected with their loved ones, and so on. The important thing is to identify the area to work in and the problem to solve, so as to know what value we can provide.
In many of your talks you speak about the importance of being digitally literate, because otherwise the consequences would be very serious. Could you explain a bit more about this?
It is a concept I use to describe and emphasize a misunderstanding we currently have about technology. Although the vast majority of us use technology in our daily lives and cannot live without it, we tend to use it superficially, without really knowing how it works.
The first message is to change the idea that using technology makes us experts in it. We must teach how technology works. It would be advisable to include a discipline called computational thinking in the education curriculum starting in primary school. Computational thinking is more than programming; beyond programming it includes skills such as modular problem solving, computational representation and analysis of data, hardware, communication networks, etc. It is also essential to accompany these competencies with the development of social and emotional intelligence skills: accepting delayed gratification, being comfortable with oneself, managing boredom, being able to focus on a task for a long period of time, and having critical thinking. It is important to educate so as to develop both technical competencies and the social and emotional intelligence competencies that today’s technology does not let us cultivate, and which I consider important and necessary for our development and well-being.
Want to know more about Technology and Innovation? Visit Lateral View or subscribe to our newsletter!
| Cómo usar el Big Data para lograr el bien común: una entrevista a Nuria Oliver | 26 | cómo-usar-el-big-data-para-lograr-el-bien-común-una-entrevista-a-nuria-oliver-104d323a1205 | 2018-05-10 | 2018-05-10 21:19:39 | https://medium.com/s/story/cómo-usar-el-big-data-para-lograr-el-bien-común-una-entrevista-a-nuria-oliver-104d323a1205 | false | 1,039 | Spreading Technology | null | lateralview | null | Elevate by Lateral View | elevate-by-lateral-view | DESIGN,INNOVATION,CREATIVITY,TALKS,COOL STUFF | lateralview | Tech | tech | Tech | 142,368 | Florencia Echevarria | CMO @lateralview. I help companies innovate through Design & Tech. | 653c90d1e23e | floreche | 80 | 75 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-11-15 | 2017-11-15 22:02:27 | 2017-10-29 | 2017-10-29 20:16:54 | 0 | false | en | 2017-11-15 | 2017-11-15 22:04:54 | 5 | 104fcb5263f3 | 0.14717 | 0 | 0 | 0 | Jeremy Hindle is the Co-Founder and CTO of Headstart. | 5 | How Headstart uses machine learning to help employers identify the best-suited candidates
Jeremy Hindle is the Co-Founder and CTO of Headstart.
The Rozee podcast is hosted by Rosalie Bartlett.
Originally published at rozee.co on October 29, 2017.
| How Headstart uses machine learning to help employers identify the best-suited candidates | 0 | how-headstart-uses-machine-learning-to-help-employers-identify-the-best-suited-candidates-104fcb5263f3 | 2017-11-15 | 2017-11-15 22:04:55 | https://medium.com/s/story/how-headstart-uses-machine-learning-to-help-employers-identify-the-best-suited-candidates-104fcb5263f3 | false | 39 | null | null | null | null | null | null | null | null | null | Human Resources | human-resources | Human Resources | 9,735 | Rozee | Content + Community for all things interesting in #Enterprise #AI! Get the Daily Rozee in your inbox, Mon-Fri: http://eepurl.com/c9sahD. | 83c3dd12d9c6 | heyrozee | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-22 | 2018-05-22 14:14:02 | 2018-05-23 | 2018-05-23 13:59:43 | 11 | false | id | 2018-05-23 | 2018-05-23 13:59:43 | 3 | 1050927f594f | 2.937736 | 2 | 1 | 0 | Halo Kamerad, di catatan sebelumnya mengenai Hadoop saya menjelaskan tentang Hadoop Ecosystem tentang elemen apa saja yang ada pada Hadoop… | 5 | Big Data dengan Hadoop (Install Virtual Box & Cloudera Distribution) — Part #3
Hello comrades! In my previous Hadoop note I explained the Hadoop Ecosystem and the elements it contains. In this note I will walk through my experience installing Hadoop from the Cloudera distribution on my PC with the help of VirtualBox.
Without further ado, here are the steps:
Download virtualization software/container
There are several virtualization products, such as VMware, KVM, and Oracle VirtualBox. In this tutorial I use VirtualBox. To download VirtualBox, open https://www.virtualbox.org/; since my PC runs Windows, I choose the host download for Windows.
download virtual box
Download the Cloudera Distribution
Cloudera’s distribution is actually just one of many Hadoop distributions; I chose it because Cloudera provides a free trial version. Open https://www.cloudera.com/ and, on the download tab, choose QuickStart VMs. On the QuickStart VMs download page, under the select platform option, choose VirtualBox (since that is what we will use), as shown in the image below. Once the download finishes, extract the .zip file.
download cloudera vms
download cloudera
Install VirtualBox
After all the files have been downloaded, install VirtualBox. Once the installation finishes, start Oracle VirtualBox.
install virtualbox
start virtual box
Create a Virtual Machine
Click New, then enter a name for the virtual machine (any name is fine), choose Linux as the type, and Other Linux as the version.
create virtual machine
Set the Amount of RAM
Next, set the amount of RAM to allocate to the virtual machine; I chose to allocate half of my PC’s RAM.
Choose the Virtual Machine Disk File
Click Next; under Hard Disk choose “Use an existing virtual hard disk file” and select the .vmdk file extracted from the Cloudera archive earlier. Then click Create.
Choose a virtual hard disk file
Settings
Once the virtual machine has been created, click Settings; under the General tab there is an Advanced tab. I set Shared Clipboard to Bidirectional and Drag’n’Drop to Host to Guest, as shown below. Then choose Start to boot the virtual machine, and wait for the environment setup to finish — it takes a while, around 15 minutes.
setting
setup environment
Enter the Virtual Machine
After the environment setup finishes, the Cloudera virtual machine appears and you are directed to the Cloudera QuickStart page in the web browser. This is like simulating the management of two computers: as we know, Hadoop uses a cluster concept, with one manager node (our PC) and one worker node (the virtual machine). Since this is a single-node cluster, there is only one worker node. Scrolling down, you can see a tutorial covering the packages included in this Cloudera distribution.
quick start
That concludes the tutorial on installing and setting up Hadoop with the Cloudera distribution. In the next note I will demonstrate how to use it to understand the HDFS concept.
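As a small preview of the HDFS concept promised for the next note: HDFS splits files into fixed-size blocks (128 MB by default since Hadoop 2.x) and replicates each block across worker nodes (the default replication factor is 3; a single-node cluster like the QuickStart VM typically uses 1). A toy Python sketch of that bookkeeping:

```python
import math

BLOCK_SIZE_MB = 128  # HDFS default block size in Hadoop 2.x

def hdfs_blocks(file_size_mb, replication=3):
    """Return how many blocks a file occupies and how many
    block copies the cluster stores in total."""
    blocks = max(1, math.ceil(file_size_mb / BLOCK_SIZE_MB))
    return blocks, blocks * replication

# A 500 MB file: ceil(500/128) = 4 blocks, 12 stored copies at replication 3.
print(hdfs_blocks(500))  # -> (4, 12)
```

This is only the accounting side of the idea; the real point of HDFS is that those blocks live on different machines, so a large file can be read and processed in parallel.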
| Big Data dengan Hadoop (Install Virtual Box & Cloudera Distribution) — Part #3 | 2 | big-data-dengan-hadoop-install-virtual-box-cloudera-distribution-part-3-1050927f594f | 2018-05-27 | 2018-05-27 15:28:24 | https://medium.com/s/story/big-data-dengan-hadoop-install-virtual-box-cloudera-distribution-part-3-1050927f594f | false | 434 | null | null | null | null | null | null | null | null | null | Hadoop | hadoop | Hadoop | 1,573 | Farhan | Catatan pemahaman saya yang sedang mempelajari Big Data, dan Machine Learning. Portfolio lainnya: github.com/theinternetbae/ | 4d381588ff6e | theinternetbae | 51 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-22 | 2018-05-22 21:12:48 | 2018-05-22 | 2018-05-22 21:17:02 | 4 | true | en | 2018-05-22 | 2018-05-22 21:32:04 | 5 | 105122d099a5 | 4.054717 | 1 | 0 | 0 | All around the world both good and bad happens, and we get to know only those that are exposed to us. And, that’s the primary… | 5 | Impact of Linguistic choice of words in News articles on our Society
All around the world both good and bad things happen, and we get to know only those that are exposed to us — and that is the primary responsibility of the media. But the bigger responsibility of these media houses is the way in which they present their content to the people.
A responsible media house’s content should be original, unbiased, free of exaggeration, and very sensitive in handling the emotions of its readers and viewers. The same story could be told in different ways, and these different ways could trigger different emotions among its readers.
It is known that we become who we are through what we say and what we read. Reading a story filled with positive words makes us feel more positive, and vice versa. So the wording of a piece plays as important a role as the content itself.
This project aims to answer how some of the major media houses in the USA weigh the wording of their content. The answer would allow readers to wisely choose a daily source of news that truly cares about its readers.
For detailed information check out the ‘Detailed Research resources’ section.
Assumptions/Target Audience:
Our target audience is assumed to be exposed to ALL the articles published on the home page.
Data has been scraped from the sources at the same time (since they get updated regularly).
Only the USA news web market is considered for this research.
CNN, Foxnews, nytimes, huffingtonpost, reuters are the top news websites considered, based on the unique visitor count obtained from the research.
Our sample considers only the articles published on these websites at 10am (CST).
1) Data Extraction/Preparation Phase:
The data is collected through a script that uses the Newspaper3k API. The script is designed to collect all the articles published at 10am (CST) on the above-mentioned news homepages. Here is a sample image of a few articles published on 10/17/2017 at 10am on Reuters.com.
I then pipelined this raw text into CSV format, segregated into columns (as shown below) for easy exploration.
The data as CSV file has the following columns:
TITLE: the Title of the article.
SUMMARY: first few lines of the article’s text.
TEXT: Full text inside the article
URL: web link to the article.
KEYWORDS: important words in the article.
Note also that not all articles published on a site’s homepage come from its own news editors. For instance, a Reuters article may be shared on the homepage of HuffingtonPost.com.
2) Preprocessing/Cleaning Phase:
My concern here is to analyze only the textual content of each article, so only the data from the text column of the CSV file is tokenized.
A major issue with this enormous volume of content is that much of it is not relevant to our analysis. So we perform language preprocessing and then build a JSON file that stores the tokenized vocabulary, for faster access to only the relevant tokenized terms during the analysis phase.
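A minimal sketch of that preprocessing step, assuming a simple regex tokenizer and a tiny illustrative stop-word list (not the actual pipeline used in the project):

```python
import json
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "to", "and", "is"}  # illustrative subset

def tokenize(text):
    """Lowercase the text, keep only word characters, and drop stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in STOP_WORDS]

article = "The storm disrupted the port and delayed shipments."
vocab = tokenize(article)
print(vocab)  # ['storm', 'disrupted', 'port', 'delayed', 'shipments']

# Persist per-article token lists as JSON for fast access during analysis.
with open("vocab.json", "w") as f:
    json.dump({"article_001": vocab}, f)
```

Storing one token list per article ID means the analysis phase can look up only the relevant terms instead of re-reading full article text each time.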
3) Analysis/Model Building Phase:
Let’s check the distribution of negative words (words that have a negative connotation), as shown below. The media house with the least use of these negative words is Fox News, followed by The New York Times; they deliver their content more optimistically than their counterparts. Our net score is calculated using the equation:
Net Negative Score = Σ (negative terms per media outlet × sentiment score)
However, to make the comparison fairer, we also need to account for the full vocabulary of each article, including both positive and negative words. It turns out that Fox News articles contain more text than those of The New York Times, so to keep the analysis fair we factor this in using normalization. Thus a normalized score is introduced!
The normalized score is the net sentiment score of all articles divided by the total number of terms used across all the articles in a day (computed separately for each media house).
Net Normalized Score = Σ (terms per day × sentiment score) / total number of terms
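Both scores can be computed directly from these formulas. The sketch below uses a made-up sentiment lexicon purely for illustration — the actual study scored terms with its own lexicon:

```python
# Toy sentiment lexicon: negative connotations score below zero.
SENTIMENT = {"crisis": -0.8, "attack": -0.9, "win": 0.6, "growth": 0.7}

def net_negative_score(terms):
    """Sum of sentiment over negative-connotation terms only."""
    return sum(SENTIMENT[t] for t in terms if SENTIMENT.get(t, 0) < 0)

def net_normalized_score(terms):
    """Net sentiment of all scored terms divided by total term count."""
    return sum(SENTIMENT.get(t, 0) for t in terms) / len(terms)

# One day's worth of tokenized terms for a single media house.
day_terms = ["crisis", "attack", "growth", "win", "crisis", "market"]
print(round(net_negative_score(day_terms), 2))    # -2.5
print(round(net_normalized_score(day_terms), 2))  # -0.2
```

Dividing by the total term count is what keeps a longer-winded outlet from looking more negative simply because it publishes more words.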
Conclusion:
As seen from the above plot, we can infer that The New York Times plays an important role not only in conveying the news but also in doing so in a healthy (comparatively more optimistic) way. Thus I recommend The New York Times for the specific target audience of web users who just want a good taste of daily news.
"People like to think they're objective and making decisions based on numbers," Dr. Lera Boroditsky said. "They want to believe they're logical. But they're really being swayed by metaphors."
Detailed Research resources:
How the words we use affect the way we think.
According to new research by Stanford psychologists, your thinking can be swayed by even a single word.
There is a famous concept called the Law of Attraction, described by Rhonda Byrne in her book The Secret. It says that we become who we are through what we say!
Lera Boroditsky: How language shapes the way we think.
Future Scope:
Increasing the Sampling size.
Increasing the spectrum of target audience.
Building a more specific word-connotation scoring system.
How news media plays a role in the development of different nations (like the USA, India, Singapore).
Final Remarks: All the data collected and used are open to access to any individual under this License.
| Impact of Linguistic choice of words in News articles on our Society | 1 | impact-of-linguistic-choice-of-words-in-news-articles-105122d099a5 | 2018-05-23 | 2018-05-23 14:52:09 | https://medium.com/s/story/impact-of-linguistic-choice-of-words-in-news-articles-105122d099a5 | false | 889 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Harish Gandhi | I am CS graduate with expertise in Data exploration through Software and Machine Learning Techniques | 12e9de977ddb | hramachandran | 3 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-24 | 2018-06-24 01:02:49 | 2018-06-25 | 2018-06-25 18:44:27 | 11 | false | en | 2018-06-25 | 2018-06-25 18:44:27 | 18 | 10512e2fc434 | 12.658491 | 13 | 0 | 0 | So you’ve got the basics down from Part 1. Now you want to know what software to know for the specific role you want at the autonomous… | 5 | What Software Do Autonomous Vehicle Engineers Use? Part 2/2
So you’ve got the basics down from Part 1. Now you want to know what software to know for the specific role you want at the autonomous vehicle company you’re pursuing. As we mentioned previously, those roles are plentiful, and there are tens of tools available for each task in any given position.
Introduction
The following sections will break down the tools autonomous vehicle engineers use by their purpose. Again, these tools are constantly changing, and new ones are added daily. Per our methodology, when meticulously combing through every single careers page from 31 autonomous vehicle companies, we listed any tool or skill that appeared at more than 3 companies, and bolded ones that showed up at more than 10. We wrote a note about some of the most common. It didn’t matter how many times a tool showed up at a particular company; to be included, it had to be used across the industry. Therefore, a tool your company uses daily may not have made the cut.
Feel free to comment below and let us know if we forgot anything glaring, or if any new tool should be included that we forgot.
Tools of the Trade
Vehicle Design — Concept
If you’re looking at being the person in charge of designing the vehicle concept itself, or maybe creating marketing materials, these are the top tools for you:
Sketch, Adobe Illustrator, Adobe Photoshop, Microsoft Paint (Just Kidding)
Adobe Illustrator is an environment in which users can create and edit 2D images. It is considered one of the most popular software products for creating conceptual images of vehicles and general marketing images.
Figure 1: Bugatti Type 57 T Concept Rendering by Artur B. Nustas
Vehicle Design — CAE/CAD/PLM/FEA
If you're interested in the mechanical design of the vehicle and its various components, you'll need to know at least one of these. These tools are all Computer Aided Engineering (CAE) tools. What used to be done with pen, paper and a slide rule is now done entirely on the computer, with fantastic visualization tools. Computer Aided Design (CAD) is 2D and 3D modeling of the vehicle and its parts. Product Lifecycle Management (PLM) is a comprehensive suite of tools that lets you organize all the parts you design and purchase. Finite Element Analysis (FEA) is a computationally intensive technique that lets you test the stress and strain of bodies in software as different forces and moments are applied to them.
CATIA, Solidworks, Pro/E, Autodesk 360, Enovia, STAR-CCM+, ANSA, Altair HyperWorks and OptiStruct, ANSYS, MSC NASTRAN, Abaqus, Polarion
CATIA is a very popular 3D engineering design tool. It lets you create and edit 3D images, and then simulate how they will be built and stressed. It is especially popular in the automotive industry.
Figure 2: 3D Image Created with CATIA (Source)
Multibody Vehicle Dynamics and Vehicle Model Simulation
If you want to design how the car actually drives, this section is for you. These software tools help engineers design suspension, brakes, and power delivery to the wheels. This software lets engineers build out representative models of the vehicle and test them in real-life driving scenarios, to optimize performance. It's also how the engineers know where the limit of the vehicle is in any given situation, which is instrumental in letting the autonomous software know how to navigate emergency scenarios. This is the same software that race car engineers use!
LS-DYNA, MSC ADAMS, CarSim, CarMaker, Dymola, OptimumG, SusProg3D, Oktal SKANeR
MSC ADAMS is a software tool for solving and visualizing all of the math that goes into multibody dynamics. In particular, it is an extremely powerful tool for multibody vehicle dynamics that can provide engineers immediate feedback on their vehicle designs.
Figure 3: MSC ADAMS Example Use (Source)
Analog and Digital Hardware Development
There’s a ton of tools that engineers use to design the circuitry that goes into autonomous vehicles, and we won’t go into much detail here. Just know, these are the two most common tools we see in this space:
LTSpice, Altium
Vehicle Software Development — General Knowledge
There's a lot of software that falls into the "general knowledge" bucket, and familiarity with it will substantially help in certain job functions. These tools are cross-team, so we included them here. We don't anticipate anyone knows all of these, but it would be good to have an understanding of what at least some of them do before walking into a job interview:
Docker, CMake, Shell, Bash, Perl, JavaScript, Node.js, React, Go, Rust, Java, Redux, Scala, R, Ruby, Rest API, gRPC, protobuf, Julia, HTML5, PHP
Docker is a virtualization layer for distributing applications. It is a much simpler and more streamlined mechanism for developing and distributing software because it can "containerize" all of the dependencies for that application without all the "fat" that makes applications slow and cumbersome. There is no need to set up separate VMs when using Docker.
Figure 4: VMs vs Containers using Docker Engine (Source)
Vehicle Software Development — Programming ROS
Most autonomous vehicle teams use ROS to control the vehicle, as stated in the previous section. If you are using ROS, it’s important to know these two tools to make your life easier:
RVIZ, PCL
Vehicle Software Development — Programming CPUs/MCUs
If you’re programming a central processing unit (CPU) or microcontroller unit (MCU) to drive your autonomous vehicle, it’s important to have these skills:
C, MISRA C, Embedded C, RTOS
RTOS stands for "Real-Time Operating System," and is an operating system architecture that allows processes to happen deterministically, or always at a set interval without delays. This is necessary for highly reliable CPU/MCU systems because latency and jitter can be the difference between executing a life-saving maneuver on time or not.
Figure 5: Example Embedded System running RTOS (Source)
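To make the determinism point concrete, here is a toy cyclic-executive scheduler in pure Python. Real RTOSes are written in C and are far more sophisticated; the task names and periods below are invented for illustration. The key property to notice is that release times depend only on the tick counter, so the same inputs always produce the same timeline, with no jitter:

```python
def cyclic_schedule(tasks, horizon_ticks):
    """Simulate a fixed-interval (cyclic executive) schedule.

    tasks: dict mapping task name -> period in ticks.
    Returns a list of (tick, names_run) pairs. The schedule is fully
    deterministic: every task runs at exact multiples of its period.
    """
    timeline = []
    for tick in range(horizon_ticks):
        due = [name for name, period in tasks.items() if tick % period == 0]
        timeline.append((tick, due))
    return timeline

# Hypothetical setup: run brake control every 2 ticks, poll lidar every 5.
timeline = cyclic_schedule({"brake_control": 2, "lidar_poll": 5}, 11)
```

Running this shows both tasks firing together at ticks 0 and 10, and brake control alone at every other even tick, exactly as the periods dictate.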
Vehicle Software Development Programming FPGAs/ASICs
If you're programming an FPGA to drive the control system on your vehicle, it's important to have an understanding of at least one of these Hardware Description Languages (HDLs), which are low-level programming languages that let you control hardware directly. Many of these tools are used in designing tiny, dedicated silicon chips, called Application-Specific Integrated Circuits (ASICs):
Verilog, VHDL, DSP, Cadence, Synopsys, Xilinx Platform Studio (ISE and XPS)
Verilog is a software language that allows the developer direct access to the hardware registers, thus making it an HDL. It is used to design and test analog and digital circuits for FPGAs, ASICs, and even some MCUs.
Vehicle Software Development — Programming GPUs (GPGPUs)
GPUs are getting a lot of press lately because of their ability to process images and point clouds in a massively parallel process. Often the skill that will be advertised on job offerings is “General Purpose computing on Graphics Processing Units” (GPGPU). These tools will help you do just that:
CUDA, OpenCL, OpenGL, DirectX, DirectCompute, Vulkan
CUDA (Compute Unified Device Architecture) is NVIDIA’s toolkit programmers use to program their GPUs. It’s a platform and API that gives the user direct access to all the bells and whistles of the GPU. CUDA works with C and C++ and integrates with OpenCL. If you’re doing image processing, CUDA is a must know.
Figure 6: NVIDIA CUDA Domain Specific Libraries (Source)
Vehicle Software Development — LIDAR, Radar, Cameras, Perception Sensor Fusion
How does your autonomous vehicle “see” the world? Is it LiDAR, Radar, or just cameras? Maybe something else, like ultrasonic sensors? Well, the following are the most common software tools and skills associated with hardware that lets you take what the sensor “sees” and make it useful information for the rest of the software stack:
Velodyne Development Kit, ZED Stereo Camera SDK, Scanse LIDAR SDK (shutdown), SLAM
SLAM stands for Simultaneous Localization And Mapping. “It is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it” (Wikipedia). This is the core approach to determining where a vehicle is in space and where it needs to go next. This is where you hear about those famous machine learning buzz words, like “extended Kalman filter.”
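The Kalman filter at the heart of many localization pipelines can be sketched in one dimension. This toy version (pure Python; the noise parameters and sensor readings are made up) fuses noisy measurements of a roughly constant quantity into a steadily improving estimate; the extended Kalman filter generalizes the same predict/update loop to nonlinear motion models:

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.01):
    """Estimate a (roughly constant) scalar state from noisy measurements."""
    x, p = 0.0, 1.0                 # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var            # predict: uncertainty grows between steps
        k = p / (p + meas_var)      # Kalman gain: how much to trust z
        x += k * (z - x)            # update: blend prediction and measurement
        p *= (1.0 - k)              # update: uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy readings of a sensor whose true value is 5.0
readings = [5.1, 4.9, 5.2, 4.8, 5.0, 5.1, 4.9]
estimates = kalman_1d(readings)
```

After only a handful of measurements the estimate settles close to the true value of 5.0, even though each individual reading is off by up to 0.2.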
Vehicle Software Development — Machine Learning, AI, Deep Learning
Here's where all the crazy research is happening. If you're into AI, Machine Learning, or Deep Learning, you're probably intimately familiar with these tools. If you want to be in this space, you'd better start learning, because creating the algorithms necessary to do much of this from scratch can be quite the task!
TensorFlow, Keras, Torch/PyTorch, CAFFE, Apache MXNet, Theano, CNTK
TensorFlow is an open source software library developed by Google that's become the de facto standard for leveraging Machine Learning and Neural Network algorithms. It can run on GPUs and CPUs. TensorFlow represents computations as easy-to-visualize stateful dataflow graphs (shown below) that operate on multidimensional data arrays referred to as "tensors," hence the name.
Figure 7: Example TensorFlow Playground (Source)
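To get a feel for what "from scratch" means, here is gradient descent, the workhorse underneath all of these frameworks, fitting a straight line in plain Python. No library is used, and the data points and learning rate are invented for illustration:

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Partial derivatives of the mean squared error w.r.t. w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw                # step each parameter downhill
        b -= lr * db
    return w, b

# Points generated from y = 2x + 1; the fit should recover w ~ 2, b ~ 1.
w, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Frameworks like TensorFlow automate exactly this loop (plus the derivative calculus) for models with millions of parameters, which is why few people write it by hand in production.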
Software Test Frameworks
So, you've written all your software; how do you make sure it works? Many times, engineers will build out their own "Unit Test Frameworks" to feed in simulated data to their software and check their answers. They stress all of the corner cases to make sure nothing breaks. Here are a few tools that the industry uses to make this easier:
Mocha, SysML, Jasmine, Jest
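In Python, the standard-library `unittest` module gives the flavor of such a framework. The function under test here is a made-up helper, but the pattern is the same everywhere: feed in known inputs, including the corner cases, and check the answers:

```python
import unittest

def clamp_speed(v_mps, v_max=30.0):
    """Clamp a commanded speed to the range [0, v_max] m/s (illustrative helper)."""
    return max(0.0, min(v_mps, v_max))

class ClampSpeedTests(unittest.TestCase):
    def test_nominal_value_passes_through(self):
        self.assertEqual(clamp_speed(10.0), 10.0)

    def test_corner_cases_are_clamped(self):
        self.assertEqual(clamp_speed(-5.0), 0.0)   # no negative speeds
        self.assertEqual(clamp_speed(99.0), 30.0)  # capped at the limit
```

Run a file like this with `python -m unittest`, and the framework reports exactly which corner case broke when someone changes the function.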
Game and Physics Engines for Simulation
Here's another super popular area for jobs. Many times, engineers want to see their software run on the autonomous vehicle as a simulation. They want to physically see the car driving in an environment, simulated. This is a challenging thing to do: taking data variables and turning them into pictures and moving images. It's even more complicated to simulate the physics of the real world. These powerful tools allow engineers to make these simulations and test out the software they create with life-like visuals:
Unity 3D, Unreal Engine, CryEngine, Lumberyard, Bullet, Havok, PhysX
Unity 3D is a gaming engine for 2D and 3D visualizations. It is often used for video game development, but is very applicable to real-world simulations, since it includes a "physics engine." This physics engine lets you create real-world scenarios and test out vehicles without solving all the math involved at every point. It also helps create representative visuals of how the vehicle will navigate the road.
Figure 8: Unreal Engine Car Configuration (Source)
Vehicle Communication Protocols
These aren’t so much software tools, as specific protocols you should be aware of if you’re programming the vehicle. There are a bunch of tools that let you see what is being sent back and forth along the communication interfaces to the specific parts of the vehicle that will be discussed later, but you should be aware of the various protocols that exist:
CAN, LIN, FlexRay, Ethernet (Automotive Ethernet), SPI/I2C, TSN (Time-Sensitive Networking), TCP/IP, WLAN (Wifi), Bluetooth, 5G, Cryptography Primitives and Cryptoschemes
CAN (Controller Area Network) is a 2-wire digital communication protocol that is standard in the automotive world. Almost everything that is controlled by the vehicle's onboard computers communicates via the CAN bus.
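Payload layouts on CAN are defined per vehicle (usually in a DBC file), but decoding a signal generally looks like the sketch below. The "wheel speed in the first two bytes, big-endian, 0.01 km/h per bit" layout is invented purely for illustration:

```python
import struct

def decode_wheel_speed(payload: bytes) -> float:
    """Decode a hypothetical wheel-speed signal from an 8-byte CAN payload.

    Assumed layout (invented for this example): bytes 0-1,
    big-endian unsigned integer, scale factor 0.01 km/h per bit.
    """
    raw, = struct.unpack_from(">H", payload, 0)
    return raw * 0.01

# A frame carrying raw value 2500 decodes to 25.00 km/h
payload = struct.pack(">H", 2500) + bytes(6)
speed = decode_wheel_speed(payload)  # 25.0
```

Real decoders are generated from the vehicle's signal database rather than hand-written, but the byte-unpacking and scaling steps are the same.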
Data Storage
You should be familiar with methods to interface to the data storage onboard the vehicle, namely:
Redundant Array of Independent Disks (RAID), Network Attached Storage (NAS)
RAID is a data virtualization technology that lets you increase the speed of writing to and reading from large chunks of memory and/or creates redundancy of the data in memory. It is necessary when dealing with large amounts of locally stored data that cannot be allowed to suffer read/write errors.
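The redundancy idea behind parity-based RAID levels (such as RAID 5) is simple XOR parity: store the XOR of the data blocks, and any single lost block can be rebuilt from the survivors. A minimal sketch in pure Python, with arbitrary block contents standing in for disks:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the RAID parity operation)."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

disks = [b"hello", b"world", b"!!!!!"]   # three equal-sized data "disks"
parity = xor_blocks(disks)               # stored on a parity disk

# Disk 1 fails: rebuild its contents from the surviving disks plus parity.
rebuilt = xor_blocks([disks[0], disks[2], parity])  # b"world"
```

Because XOR is its own inverse, XOR-ing the parity with all surviving blocks yields exactly the missing block; hardware RAID controllers do this per stripe at full disk speed.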
Databases
So once you take that data off the hardware onboard the vehicle, you need a place to put it. These tools help you do just that, by managing all of the data in an organized database with a defined way of recording and extracting it:
HBase, NoSQL, MongoDB, PostgreSQL, SQL, MySQL, DynamoDB, HDP, Cloudera, EMR, Cassandra, Vertica
NoSQL is a non-relational database approach used for cloud storage and retrieval of data. NoSQL is often used instead of SQL due to its performance and ability to handle large amounts of data in real-time.
Figure 9: NoSQL vs SQL from Intellipaat
Streaming technologies
Stream processing is a computer programming architecture, or paradigm, that allows developers to process data continuously as it arrives, often in parallel with the help of multi-core computers, GPUs, or even FPGAs. This technology is necessary for speeding up data crunching dramatically.
Apache Kafka, Storm, Flink, Spark Streaming
Apache Kafka is a common streaming technology written in Java and Scala used for high throughput, low latency handling of real-time data. Since performance is so key with streaming platforms, many users also employ monitoring applications alongside Kafka.
Batch technologies
Similar to streaming technologies, batch technologies provide a high-performance methodology for handling large sets of data. The difference is that batch technologies distribute the data among multiple hardware resources and process it in parallel. This is particularly useful for redundancy and reliability.
Apache Hadoop, MapReduce, Apache Spark, Hive, Presto, Impala
Apache Hadoop is an open-source collection of software libraries that assumes hardware failures are common, so it employs a strategy of distributing data sets among multiple resources, typically on a server farm.
Here is a good simple article on Batch vs Streaming Technologies by Gowthamy Vaseekaran.
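The core difference can be felt in a few lines of Python: a streaming computation updates its answer one record at a time in constant memory, while a batch computation sees the whole data set at once. This is only a toy; Kafka and Spark jobs follow the same two shapes at vastly larger scale:

```python
def streaming_mean(stream):
    """Yield a running mean, updated incrementally per record (O(1) memory)."""
    count, mean = 0, 0.0
    for x in stream:
        count += 1
        mean += (x - mean) / count   # fold each new record into the answer
        yield mean

def batch_mean(values):
    """Compute the mean over the whole data set at once."""
    return sum(values) / len(values)

values = [2.0, 4.0, 6.0]
running = list(streaming_mean(iter(values)))   # [2.0, 3.0, 4.0]
final = batch_mean(values)                     # 4.0
```

Both approaches end at the same answer; the streaming version simply had an answer available after every record, which is the property real-time platforms are built around.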
Serialization
Serialization is a method for converting data structures into a stream of bytes or text so they can be stored or transmitted and reconstructed later. This is useful for piping data between applications.
Avro, Parquet, JSON
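With JSON, the round trip is one call each way in the Python standard library (the record fields below are invented); Avro and Parquet layer schemas and columnar storage on top of the same serialize-then-deserialize idea:

```python
import json

record = {"frame_id": 1042, "speed_mps": 12.5, "objects": ["car", "cyclist"]}

encoded = json.dumps(record)    # serialize: in-memory structure -> text
decoded = json.loads(encoded)   # deserialize on the consuming side

assert decoded == record        # the round trip preserves the data
```

The producing application writes `encoded` to a file, socket, or message queue, and the consuming application reconstructs an identical structure on its end.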
Vehicle Test — MIL, SIL, HIL, In-Vehicle Test
Test Engineering roles are plentiful, and that's because it's not easy to make sure everything works all the time on autonomous vehicles. But there are tools that do speed up the process. These software tools provide environments, templates, and architectures for validating anything on the vehicle, whether it's just on an engineer's bench or in a high-volume manufacturing setting:
NI LabVIEW, NI TestStand, NI VeriStand, dSpace HIL Simulation Systems, dSpace RTMaps, Proemion PETools CAN Tools, Vector CANalyzer, Vector CANape, Vector Capl
dSpace HIL Simulation Systems are a software and hardware suite of tools for taking software, models, or hardware and generating inputs and outputs to that "thing" under test. This can be simulated as fake data in software, or even real-life electrical signals and physical inputs/outputs for HIL. dSpace is so popular here because they have served the automotive industry for so long and are intricately tied to automotive ECU design.
Figure 10: National Instruments HIL Architecture (Source)
Data Visualization and Analysis
After all that test and simulation, you have tons and tons of data in different formats. It’s quite the task to consolidate that data, put it in a format that’s easily consumable, analyze it to determine what was going on, and make intelligent conclusions about what to do next based on that analysis. There’s a few tools available to help with this today, and there are some new tools on the horizon 😊
Microsoft Excel, NI DIAdem, Splunk, Datadog, Logz.io, ELK Stack, Looker, Tableau
Microsoft Excel is used by everyone, and many times when it shouldn’t be. There are often better tools for aggregating and visualizing data, but people tend to use the devil they know…
Web Services
Cloud computing platforms are all the rage these days, and for good reason. People no longer have to set up servers in their offices, which are large and costly. Now you can rent server space online through any one of the following cloud computing platforms. Some are even specially tailored for machine learning!
Azure and Azure ML, Google Cloud and Google AI, Amazon Web Services (AWS)
Amazon Web Services (AWS) is an on-demand cloud computing platform. It is so popular because it is extensible from individual use up to large corporation use.
Source Code Control (Others)
In addition to Git, which has already been talked about, these two source code control tools are the next most-common:
Perforce, Subversion (SVN)
Issue Tracking Products
Jira is by far the most common issue-tracking tool. It is used for bug tracking, service requests, and project management. It is a paid product.
Requirements Management
When anyone takes on a problem, they first document what will constitute successful accomplishment of that task. Those are considered the "requirements" of that task. When the problem gets big and challenging, there are often tons of requirements that must be fulfilled at certain times, and at times in conflict with one another. That's why there are software tools to help manage that process.
Rational DOORS, JAMA, Rhapsody
IBM’s Rational Doors (Dynamic Object-Oriented Requirements System) “is a group of requirements management tools that allow you to capture, trace, analyze and manage changes across the development lifecycle” (Source). Over time it has become the standard for large, complex software project requirements management.
Other Automotive Topics
Here are a few automotive topics, regulations, and standards you ought to be aware of, just to make sure you can speak intelligently about vehicles in general:
Automotive SPICE, SixSigma (DFMEA, HARA, FTA, FMEA), AUTOSAR, ASPICE, ISO26262, other Standards (ISO, IEEE, ANSI, ASTM, SAE, NHTSA, etc)
AUTOSAR (AUTomotive Open System ARchitecture) is a conglomerate of companies that came together to standardize the software architecture of ECUs. This made it possible for Tier 1 suppliers and even OEMs to develop their ECUs and have them work with standard tools for test and servicing.
Figure 11: ECU Design with AUTOSAR (Source)
Conclusion
That's a lot of software to wrap your head around. It would be silly to try to learn it all. Simply pick the area that you would like to pursue and get really good at at least one tool from that area. The beauty of the roles at autonomous vehicle companies is that they'll often use the clause "or like software/skill," meaning that as long as you know one of the like tools, your skill will be transferable.
Where do you go to learn the tools? That depends on the nature of the tool. Open source software typically has a fantastic community and many of their users are self-taught. Proprietary tools can be learned through university or continuous education programs. Some tools you’ll just have to learn on the job.
There are some great online programs, namely Udacity’s Self Driving Car Nanodegree Program (Paid, two levels available) and MIT’s Deep Learning for Self-Driving Cars (free, self-paced) program by Lex Fridman.
Let us know what we forgot, and happy coding!
What Software Do Autonomous Vehicle Engineers Use? Part 1/2
| What Software Do Autonomous Vehicle Engineers Use? Part 2/2 | 36 | what-software-do-autonomous-vehicle-engineers-use-part-2-2-10512e2fc434 | 2018-06-25 | 2018-06-25 18:44:27 | https://medium.com/s/story/what-software-do-autonomous-vehicle-engineers-use-part-2-2-10512e2fc434 | false | 3,010 | null | null | null | null | null | null | null | null | null | Self Driving Cars | self-driving-cars | Self Driving Cars | 13,349 | Jason Marks | Founder of Olley, Accelerating the Mobility Revolution | a1d01be9b8f2 | olley_io | 119 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 92acce941089 | 2017-10-25 | 2017-10-25 09:45:30 | 2017-10-26 | 2017-10-26 11:12:36 | 3 | false | en | 2017-10-26 | 2017-10-26 11:48:42 | 6 | 1051e1d3ac6c | 4.961321 | 0 | 0 | 0 | “What virtual reality does is force you to explore and discover” | 5 | Virtual Reality: Do You Really Need It? This Will Help You Decide…
“What virtual reality does is force you to explore and discover”
About a week ago I attended “VR Augmented And Mixed Reality Event” as part of #SPIFFEST2017 (San Pedro Film Festival) and it was an eye-opening experience to hear and see what was being said about what’s controversial in the world of technology.
Aside from having the privilege of collaborating with an innovative local film festival in my town, I was looking forward to learning about virtual reality, as I missed it last year. I'd heard technology and innovation were coming to town last year, and I'm glad I was able to attend this year.
I share this blog with the intention to spread awareness as main talks were about increasing familiarity and educating people. Whether you’ve heard of virtual reality / artificial intelligence or not it’s becoming more common as technology expands.
Virtual Reality Technology
Is mind-blowing; however, there are many knots to untangle as far as responsibility for content goes. Stats show that in the United States alone it's harder than in other countries to teach responsibility and ethics. In other parts of the world, for example Europe or Africa, it's acceptable due to cultural differences to be "naked," for example, while here in the United States it's not acceptable and would be described as indecent exposure, and you might get fined, so in other parts of the world responsibility for content does not apply.
It Boils Down To Society And Culture
Artificial Intelligence
Has its own "chatter" language we don't quite understand, and that's alarming. Facebook shut down an experiment after two artificially intelligent programs seemed to be chatting to each other in a bizarre language only they understood. I don't know about you, but to me that is creepy!
Curious? You Can Read Forbes Article Here
Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique…www.forbes.com
Today's popular Twitter trend was about a sarcastic robot called "Sophia," who became the first bot in the world to be recognized with Saudi citizenship!
Here’s what Sophia had to say:
Twitter Post 10/26/2017 @NotthatkindofDr
I found it funny at first but then said Whoa! Is this what robots are being programmed to say?
So that brings me to talk about the responsibility of virtual reality. Sophia is just one of many pioneering robots being designed (let's not go into the dark side of sex robots). That's another blog!
Accelerate learning is not about censorship; it's about choices. It's using power for good, not evil. For achieving big goals together. Ethics and morals.
Accelerate Learning
Is the most advanced teaching and learning approach used today. It's a system used for both speeding up and strengthening the design and learning process. It's not used to censor but to be responsible. Know when boundaries have been crossed, because when it's in simulation it doesn't know. (The enactment of testing robots.) Yes! Robots don't know, just as our brains can't tell the difference between the truth and a lie.
Did you know?
Recently Facebook CEO Mark Zuckerberg got into a big boo-boo with Puerto Rico for his recent virtual reality tour and apologized for offending anyone with his video.
You can read Mashable article here
Mark Zuckerberg apologizes for that awkward VR tour of Puerto Rico
Mark Zuckerberg has apologized, in a Facebook comment, for his recent virtual reality tour of Puerto Rico. The Facebook…mashable.com
So just because we can use virtual reality should we do it?
That all depends how it’s being used, what part of the world and if it’s making an impact. Something to think about.
As we move forward we need to think what’s okay to do and what’s not okay? (digital universals)
Hollywood is still not ALL-IN with virtual reality however there are some films that are brave such as the movie “Jungle Book” and Oscar nominated “Pearl”
Check them out …
Disney Launches 'Jungle Book' VR Experience
Yesterday The Walt Disney Studios released an interactive 360-degree video and virtual reality experience tied to the…www.awn.com
Watch PEARL: The First VR Animated Short To Earn an Oscar Nomination | Geek and Sundry
While the concept of virtual reality has been around for decades, it's still a relatively young medium for filmmakers…geekandsundry.com
There’s momentum in virtual reality. Bloggers can be captivating, get engagement and dialogue started on V/R. Initiate conversations regarding rules and what’s appropriate. Virtual reality is good for non-profits, artists, painters and story-telling experiences.
There's great demand for VR and Augmented Reality (AR) in training tractor-trailer drivers. UPS enhances driver safety training with virtual reality.
Curious? See Article here:
UPS Enhances Driver Safety Training With Virtual Reality | Virtual Reality Reporter
UPS Enhances Driver Safety Training With Virtual Reality VR experience to debut at nine U.S. Integrad® facilities this…virtualrealityreporter.com
The reason many filmmakers are not jumping on the VR technology train is lack of return on investment (ROI).
Many quoted:
“If I’m not generating anything out of this then I’m not going to invest in it”
It's risky business for filmmakers; however, I believe the dilemma is that they're afraid to innovate and lose control of how we see films, as virtual reality explores 360 degrees, and filmmakers don't like to expose everything in order to capture the imagination.
Innovation is pretty tricky compared to what's tried and true and what's working now, yet sticking only with the familiar will leave you stuck in the mud as we are experiencing changing times. Get rid of fear.
What Prevents People From Virtual Reality?
Hardware, Software and Content…
The reason many don't invest is the cost. It ranges from $2,000 and up, as you need a compatible virtual reality headset that requires a desktop computer. A laptop doesn't do the job, as it doesn't have graphics and hardware powerful enough to run a VR headset.
As innovation grows I believe it will become less tedious to purchase the equipment and system in order to facilitate the art of virtual reality. We are in beginning stages and people are just getting their feet wet.
There's not much content out in social media. Bloggers are starting to spread awareness, and we have some of the most intelligent minds, such as Elon Musk, Bill Gates and Stephen Hawking, warning us about the dangers of artificial intelligence (AI).
Whether AI or VR (virtual reality), this blog was written as a glimpse of awareness on what to expect more of in the following months, years… as technology thrives.
I was honored to be at this event and work this fascinating paid film project as a blogger in behalf of San Pedro Film Festival and look forward to attending and becoming more self-aware of technology within the following years…
Sharing is fascinating… Please feel free to share with your friends
“Technology is entrepreneurial you can learn a lot from it’s success and loss”
Laura Ramos / Branding Fascinista
| Virtual Reality: Do You Really Need It? This Will Help You Decide… | 0 | virtual-reality-do-you-really-need-it-this-will-help-you-decide-1051e1d3ac6c | 2018-01-24 | 2018-01-24 12:39:31 | https://medium.com/s/story/virtual-reality-do-you-really-need-it-this-will-help-you-decide-1051e1d3ac6c | false | 1,169 | It’s A Fascinating World | null | networkchikie77 | null | Laura Ramos | laura-ramos | BRANDING,MARKETING,SOCIALMEDIA,BRANDS,HOLLYWOOD | imlauraramos | Virtual Reality | virtual-reality | Virtual Reality | 30,193 | Laura Ramos | Branding & Marketing Fascinista, Harvard Life Coach, Visualization Storyteller, Blogger, Philantropist 🌟https://about.me/imlauraramos | f9c18ff09467 | imlauraramos | 310 | 392 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | f898f5443d54 | 2018-06-11 | 2018-06-11 15:20:46 | 2018-06-11 | 2018-06-11 16:59:53 | 1 | false | en | 2018-06-11 | 2018-06-11 17:08:27 | 2 | 105243a94df3 | 36.928302 | 0 | 0 | 0 | How does AI work? When does AI break down? Will artificial neural networks start to resemble biological neural networks (animals’ brains)… | 4 |
[PODCAST] Episode 4: Understanding AI Technology
How does AI work? When does AI break down? Will artificial neural networks start to resemble biological neural networks (animals’ brains)? We go ‘Beyond The Hype’ with Dr. Janet Bastiman, who provides an accessible introduction to the workings, limits and future of AI technology.
[00:03] Janet
You may end up with an AI that is racially biased without intending to create one.
[00:08] David
Welcome to the MMC Ventures Podcast. We’re going beyond the hype in Artificial Intelligence.
A warm welcome to listeners. I’m David Kelnar, Partner and Head of Research at MMC Ventures, the insight-led venture capital firm based in London.
In this six-part series, we’ll be hearing deep insights from some of the world’s leading AI technologists, entrepreneurs and corporate executives — while keeping things accessible for the non-specialist.
I think AI is today's most important enabling technology, but it's not easy to separate fact from fiction. My goal for this series is for us to come away better informed about the reality of AI today, what's to come, and how to take advantage.
I’m excited today to speak with Dr Janet Bastiman, Chief Science Officer at StoryStream.
Janet’s going to provide an accessible introduction to AI technology, while describing its capabilities, its limitations and its likely evolution. Janet will also guide us on how to build great AI teams, and how companies can successfully make AI work in the real world.
Janet is a rare leader who combines deep technical expertise in AI with years of experience building AI teams and commercialising AI technology. Janet has a PhD in computational neuroscience and
a Master’s degree in biochemistry from the University of Oxford, where she was also president of the University biochemistry society. She regularly contributes technical articles in the field of AI to leading publications, and blogs about mathematics and technology at janjanjan.uk.
Janet is also one of the U.K.’s leading C-level AI executives, with many years of experience shaping technical strategy, building and leading technical departments and managing processes of technical improvement. Before joining StoryStream, Janet was Chief Science Officer and CIO at SmartFocus, and prior to that Janet served as CTO at SnapRapid. Janet, thank you for sharing your experience with us today.
[02:01] Janet
Thank you.
[02:01] David
This series is sponsored by Barclays. I asked Barclays for a strapline they’d like to include as sponsor and I thought their response was really interesting. “Thanks. I’m not sure about slogans. Here’s just how we think about AI. We think AI is incredibly important — a whole new field that’s as significant as anything that has gone before. And we think about it a lot. We think AI is vital to our business and we’re working hard to take advantage of it for our customers. And we need to learn from, and collaborate with, a wide range of people to ensure success. Technology advances fastest not when it’s held close, but when people go out, listen and contribute.”I thought that was better than any slogan, so I asked if we might run with that. I have pleasure in doing so.
Janet, Let’s start by discussing AI technology. Could you explain, for the non-specialist, how modern AI — which we usually refer to as machine learning — differs from the basic kinds of AI we’ve had for decades, and indeed from traditional, rules-based software?
[02:58] Janet
Traditional AI is a computer system that can make a decision that appears to be intelligent on specific inputs. And as computational efficiencies increase, we’ve been able to do very complex things through decision-based trees.
More recently, machine learning has evolved. Now here, rather than the programmers deciding what value to put on all those inputs in order to get an intelligent output, we let the computers themselves decide that, based on showing them, for given inputs, what the outputs need to be. And with this, we can do far more complex problem solving than you could with traditional AI, because you don't need to know how to solve the problem; you devolve that onto the computers.
[03:40] David
So, it’s about enabling software to learn through training and to kind of self-optimise instead of following traditional sets of rules written by people?
[03:50] Janet
Absolutely. And because you’re taking the human out of the equation, you can solve far more complex problems that you just can’t conceptualise — because we just can’t think in that many dimensions.
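(As a concrete aside from the editors, not from the episode: the "computer decides the weights" idea can be shown in a few lines of Python. A perceptron learns the weights for a logical AND gate purely from example inputs and outputs, instead of a programmer hand-coding the rule.)

```python
# Learn the weights for logical AND from examples (the perceptron rule),
# instead of hand-coding "output 1 only if both inputs are 1".
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(50):                       # repeatedly show inputs and answers
    for x, target in examples:
        error = target - predict(x)       # how wrong was the current guess?
        w[0] += lr * error * x[0]         # nudge each weight toward correct
        w[1] += lr * error * x[1]
        b += lr * error

assert all(predict(x) == target for x, target in examples)
```

The programmer never specified the decision logic; the weights that encode it were discovered from the data, which is exactly the shift from traditional AI to machine learning described above.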
[04:03] David
There are more than fifteen types of machine learning. One kind of machine learning, so-called deep learning, gets a lot of attention, but we know that a lot of other forms of machine learning are actually more suitable in a lot of contexts. Could you give us an overview of two or three of the most popular machine learning techniques, and explain the approach they take?
[04:24] Janet
Absolutely. Deep learning is definitely the buzzword of the day. But you’re quite right, there are lots of different variants on AI and machine learning. And I always think of them as just different tools in the toolbox. So, some of the ones that people might be aware of: generative adversarial networks. You have two networks that are in competition: one is creating something, and the other is trying to work out what’s artificially created by the first network and what’s real. And they’re competing against each other. And by doing that, both of them get better. The first network gets a lot better at creating something and the second network gets a lot better at spotting fakes.
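The adversarial loop Janet describes can be sketched with a deliberately tiny toy model. This is not a production GAN: the “generator” here is a single shift parameter producing one-dimensional numbers, the “discriminator” is logistic regression, and all constants (the target distribution N(3, 1), learning rates, step counts) are invented for illustration.

```python
import numpy as np

# Toy 1-D GAN sketch: the generator outputs mu + noise and tries to fool
# the discriminator; "real" data is drawn from N(3, 1). The discriminator
# D(x) = sigmoid(w*x + b) tries to tell real from fake.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu = 0.0          # generator parameter: starts far from the real mean of 3
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(3000):
    real = rng.normal(3.0, 1.0, 64)
    fake = mu + rng.normal(0.0, 1.0, 64)

    # Discriminator update: label real=1, fake=0 (logistic-loss gradients).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: shift mu so the fakes look "real" to the discriminator.
    d_fake = sigmoid(w * fake + b)
    grad_mu = np.mean(-(1 - d_fake) * w)   # d/d(mu) of -log D(fake)
    mu -= lr * grad_mu

# After the contest, mu should have drifted toward the real mean of 3:
# each side's improvement forces the other to improve.
```

The two gradient steps are the whole idea: the discriminator gets better at spotting fakes, which pushes the generator's output distribution toward the real one.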
[05:03] David
Presumably, because both are operating at the speed of software, if you like, that process of iteration is pretty rapid?
[05:10] Janet
Yes, absolutely. And that’s a great one. That’s been used in a lot of things, particularly in some of the art-creation algorithms that are out there. And then you’ve also got Bayesian inference.
So, what you’re trying to do with a Bayesian approach is work out, for a single data point, which category it’s likely to be in. So, if you think of a scatter plot, you should be able to draw a line between the data and say everything above this line belongs to category A and everything below belongs to category B. And sometimes that’s not obvious just by looking at the data, so you have to transform it — maybe look at different mathematical transforms. So, look at it in polar coordinates rather than standard x-y, and then you start to see different patterns in the clusters and you’re better able to draw the line. Once you have that, you can then look at any new data and decide where it fits with relative confidence.
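Janet’s polar-coordinates point can be illustrated with synthetic data: two classes that no straight line in x-y separates (a cluster inside a surrounding ring) become separable by a single threshold once you look at the radius. The data and threshold below are invented; this is a sketch of the transform idea, not a full Bayesian treatment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
angles = rng.uniform(0, 2 * np.pi, n)

# Class A: points near the origin; class B: a ring around them.
radius_a = rng.uniform(0.0, 1.0, n)
radius_b = rng.uniform(2.0, 3.0, n)
xy_a = np.column_stack([radius_a * np.cos(angles), radius_a * np.sin(angles)])
xy_b = np.column_stack([radius_b * np.cos(angles), radius_b * np.sin(angles)])

# Transform to polar: now one number (the radius) carries the structure.
r_a = np.hypot(xy_a[:, 0], xy_a[:, 1])
r_b = np.hypot(xy_b[:, 0], xy_b[:, 1])

# "Draw the line": any threshold between the two radius bands separates them.
threshold = 1.5
accuracy = (np.mean(r_a < threshold) + np.mean(r_b >= threshold)) / 2
```

In the original x-y coordinates no linear boundary works; after the transform, the boundary is a single number.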
[06:07] David
So, often Bayesian approaches are about classifying data. What are the usual goals for some of the other machine learning approaches we talked about? Do different algorithms have different kind of fundamental goals?
[06:19] Janet
Classification’s definitely the biggie. A lot of the work has been done in working out what label to apply to something, what sentiment a piece of text has. But then you look at more complicated problems, like predicting traffic flow in a city, where you’ve got far more variables and a lot more variability. Then you might need to take a more abstract approach than just the raw data. And that’s where some of these other things come in. And even with the adversarial networks, in terms of creativity from the art world, or when you’re looking at some of the game players like Go and chess and even some of the 8-bit computer games that have been done, you need a slightly different approach, because you need to force the model to go down a different route to be more adaptable.
[07:06] David
We’ve talked about some of the different tools for machine learning, and how they work. But of course, machine learning isn’t a panacea, is it? It’s not a solution to every problem. What sorts of problems is machine learning well suited to, and where does it break down?
[07:20] Janet
Okay, so, image classification is the traditional one. That was where it really had its breakthrough — the classification problems and understanding objects… it’s very well suited to that. It’s also well suited to absorbing large amounts of data and extracting meaning from it, and things like filling in video where there are missing frames. Anything where there are known quantities and you’re trying to get from A to B, it’s very, very good at.
Now even within that space, it breaks down very quickly if it’s not been created properly, and you see this in image classification all the time. Changes to an image that a human can’t detect or doesn’t really notice, like a minor layer of static that doesn’t make the image look any different, will completely fool a deep learning network, and it will come out with an incorrect classification. And similarly, if you put random patterns through these networks they can come out with all sorts of crazy answers. And you get the same sort of thing if you put nonsensical text in. So anything that’s outside the boundaries of what it knows, it will break down on very quickly.
And if what it has been trained to do is too narrow, then you’ll end up with a problem called overfitting, where as soon as you get something that’s even slightly different, it will come up with something nonsensical. And quite often it can only tell you about what it knows. So if you have a network that’s been trained on classifying, let’s say, football players and which team they’re in, and then you show it a poodle, it will tell you the closest football team that it thinks matches that poodle. And that’s one of the biggest problems we have — the specificity of the networks.
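Overfitting is easy to demonstrate in miniature. In this sketch (all numbers invented) the true rule is a straight line, y = 2x, observed with noise. A model with far too much freedom, a degree-9 polynomial through ten points, drives its training error to essentially zero by memorising the noise, and then does worse than the simple model on unseen points.

```python
import numpy as np

np.random.seed(0)

# Ten noisy observations of the true rule y = 2x.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + np.random.normal(0, 0.3, 10)

simple = np.polyfit(x_train, y_train, 1)    # matches the true complexity
overfit = np.polyfit(x_train, y_train, 9)   # enough freedom to fit the noise

# Unseen points in the same range, with noise-free ground truth.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_err_overfit = mse(overfit, x_train, y_train)   # ~0: memorised the noise
test_err_overfit = mse(overfit, x_test, y_test)      # wiggles between points
test_err_simple = mse(simple, x_test, y_test)
```

The over-parameterised model looks perfect on its own training data and falls apart on anything even slightly different, which is exactly the failure Janet describes.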
[09:12] David
It sounds like machine learning is very good at classification, and finding subtle patterns in data. Apart from over-fitting, what other problems arise with machine learning — and in which domains is it less effective?
[09:28] Janet
Well, pretty much all the things that we’re quite bad at without gut instinct. I mean, if you think of the stock market, I’m sure there’ll be many people who would be really keen to have an AI that could predict future trends in the stock market…
[09:41] David
I’m guessing one or two!
[09:42] Janet
But there are so many variables that go into that, from weather patterns affecting availability of raw materials, to scandals that might happen about the leadership team or you know, data loss issues or even who else is buying or selling shares.
[10:01] David
So almost infinite…
[10:02] Janet
Yeah, you’d have to know pretty much everything that was going on with everybody connected with that business, and all of the inputs to that business, and all of the other relevant businesses. And it’s a remarkably huge problem. Now, whether it’s unsolvable just because we don’t yet have something big enough that can hook into all those things in real time in order to do the predictions, I don’t know. I suspect that’s the case, but it’s such a complex problem that you can’t break it down into something definable.
[10:31] David
The dynamic you’re describing seems to be that machine learning is only effective when it’s making decisions in relation to systems that are wholly described by the available data. And if we can’t provide data that wholly describes a situation, it’s going to struggle to get the kind of results we want?
[10:48] Janet
Yes. And the amount of struggling will obviously depend on how much there is outside of what we’ve described. So, the problems where it’s been very successful have been 100% described; we know what the boundaries are. As soon as you start going outside of those boundaries then you get problems. And if you think of the difficulties that we’ve had with autonomous vehicles in natural environments… you might have a vehicle that can drive well in a nice, safe, tested environment, but as soon as you start putting in pedestrians and cyclists and pigeons and all these other sorts of things going on, it becomes a much more complex problem.
[11:31] David
This relates to an example I saw. I can’t remember the year of the study. It was the University of Pittsburgh, a decade or two ago — quite early in the life of AI. And it was evaluating the efficacy of an AI system that was designed to prioritise which patients in a hospital received care and whether they needed escalation. And the machine learning system recommended that people with asthma didn’t need as much care. That turned out to be wrong. And the reason it was wrong was because it didn’t know, because it didn’t have the data, that people with asthma actually tended to get more care elsewhere. So, the data just suggested that they did better, but that was due to wholly other reasons that the system didn’t understand. Is that the kind of… difficulty?
[12:13] Janet
It’s exactly that. When you’re gathering the data, if you’re just looking at the hospital admissions, you’re missing out on why you have those numbers. They’ll be skewed numbers, but why do you have these skewed numbers?
And similar things have happened in the US, with using AI to influence sentencing. That’s exactly the same thing. You have a questionnaire which, while it’s not specifically asking race-related questions, includes some that are correlated with race. So you may end up with an AI that’s racially biased without intending to create one. And that’s where a lot of the testing really needs to come in. And you see it time and time again: something tested in the sterile environment of a university, or even in industry, in the development area, turns out, when you get real data in, not to have accounted for these variables, and you end up with significant problems in some cases.
[13:17] David
What do you see as the key challenges or limitations with today’s machine learning capabilities? And how might they be solved?
[13:26] Janet
I think the key one is that we train something for a specific purpose and that’s all it can do. We’re now starting to see the ability to transfer those capabilities into very similar problem spaces.
But generally you find that as you train something on something else, it forgets how to do the first task you set it, or it just becomes very bad at it. Whereas we’re quite good at learning multiple skills and transferring those skills around. And when we crack that, then we can have more generalised intelligence.
[14:00] David
And this is the issue of transferability, as it’s often known?
[14:02] Janet
Yes. So there’s a lot of work being done on transfer learning, which is the field, and it’s getting there. There are big improvements. But it’s still very narrow. So, think of the difference between me playing a first-person shooter computer game and then playing a problem-solving computer game: I still need to use the controls in the same way, but how I’m playing the game is very different. And we don’t have that adaptability yet.
[14:31] David
Why does that matter? The idea that we need entities to be able to do lots of different things fairly well seems a human approach to the world. Couldn’t we bundle up lots of algorithms, each of which is good at doing one thing, but which together can accomplish the range of tasks? When looking for transferability, are we… anthropomorphising a bit here? Why can’t we just use bundles of algorithms?
[14:57] Janet
It may well be anthropomorphic. But look at the problems we’re trying to solve. If we want, for example, autonomous vehicles, we need them to react well to unexpected events, in the same way that we would if something ran out in front of the car. And understanding the difference between sitting at a junction waiting almost infinitely for a gap that’s absolutely perfect, or just accelerating a little bit more than normal, even if it’s only one mile an hour, just to make a gap that’s there: that requires a level of creativity. And even if you had a bundle of different algorithms, you’d still need something at the top level controlling all of that, putting it together and making the final decision. And that’s the difficulty.
[15:46] David
It’s decision making?
[15:48] Janet
Yes…
[15:59] David
… at some level, the reality is that most real-world situations don’t fit neatly into little boxes of discrete tasks. It’s the coordination between them and knowing when to employ which?
[16:00] Janet
Absolutely. And, you know, as we push out into our solar system, we’ll probably want to be sending AI robots into places that we don’t want to send humans for safety reasons. And they’re going to need to be adaptive and creative.
[16:13] David
And hence the need for transferability to handle that?
[16:14] Janet
Yes.
[16:15] David
Beyond transferability, what are some of the key — perhaps the key — challenges of machine learning today, do you think?
[16:21] Janet
I think data is a big problem. We need to crack the data problem, because we don’t need thousands of examples of a horse to know what a horse looks like. And we can do a lot of things from one or two examples. So, understanding how we learn and how we can adapt will, I think, be critical. Because until we know how we do things, it’s very difficult to model them.
Now that’s one approach, and that’s the approach that we’ve taken so far with the neurons. We’ve modelled them on biological neurons, but they’re quite limited. There may be a better way of doing things; even Geoff Hinton, when he first came up with it, said this is an approach, not necessarily the only approach. But it’s working, and while things are working we tend not to look for other solutions. So it may be that we’re missing out on approaches that are more efficient and more effective. In order to get very accurate algorithms we need a lot of very well labelled data. And we can do some things with smaller amounts, but nowhere near as well as we should be able to.
So, right now we have those two problems: we either need to get better at doing things with less data, or get more data so that we can be better with what we’ve got. Or potentially a completely unique approach that we haven’t thought of yet.
[17:37] David
But, sort of, something has to give…
[17:38] Janet
Yeah.
[17:39] David
We either need a lot more data, or better algorithms, or both — or something else entirely… Where is research around machine learning currently focused? In what areas do you think we might see the greatest improvements in machine learning technology in the coming decade?
[17:52] Janet
A decade is a very long time in AI. Everything that we’ve predicted has happened a lot sooner than we thought. So, I think we’re going to get some big breakthroughs in adaptability.
I’m seeing some really interesting things particularly in robotics. Boston Dynamics are constantly releasing videos of the crazy things that their robots can do…
[18:16] David
We’ve got back flipping robots now…
[18:17] Janet
Absolutely. And just seeing how naturally the robot moves and can jump and move around is really quite exciting. It shows that we can build an AI that’s capable of understanding its environment and interacting with it in the same way that we do. Which is… it’s quite a low-level brain feature for us, but it’s still a really interesting development. And I think that’s going to continue, so we’re going to see a lot more interaction with robotics from an AI point of view.
Obviously, the autonomous vehicles are a big thing just because it needs to take so many inputs and so quickly in order to make a decision. I think from a legislative point of view, as soon as we get that out of the way we’re going to see autonomous vehicles on the roads. I’d really like to see that in the next decade, hopefully sooner.
[19:11] David
So would my wife! She’s holding out not to get a driving licence. She’s holding out for autonomous vehicles.
[19:16] Janet
My daughter’s six, and I don’t think she’ll ever learn to drive.
[19:21] David
You touched earlier on the fact that machine learning algorithms usually require large data sets for training. To what extent do you think we will see new algorithms that change that requirement — or is that a more fundamental problem given the nature of systems that learn through training?
[19:38] Janet
I think it is a fundamental problem, but necessity is always the mother of invention. And if someone’s got a great idea and there isn’t the data set available, they’ll find a way of doing it differently. It’s very easy, when you’ve got a way of doing something that works, to say, okay, I just need 100,000 labelled images, or 50,000 paragraphs of text, and I’m good to go. Then you focus on refining the network and the weights and getting it as good as you can. You don’t think about other solutions. But if you’ve not got that, you become quite creative. And I think we’re going to need some new problems that force people to be creative; then we’ll get the killer ideas coming out.
[20:22] David
Let’s talk about deep learning, one of the most exciting and productive areas of AI in recent years. Deep learning is one kind of machine learning, and it involves the creation, in software, of so-called artificial neurons, and artificial neural networks that replicate, somewhat, the function of a human brain. Could you briefly explain for the non-specialist how deep learning works?
[20:44] Janet
Okay, so, at an input level, you’re taking your raw data. And the neurons that we model will have a number of inputs. And each input will have a weight associated with it to say how important that input is. And then the neuron itself will combine all of those inputs and weights to give a single signal output. Which it will then pass onto one or more neurons in the next layer.
So, you create layers of these neurons and each takes all the data that you pass to it from the layer above. And it will then look at all the weights and decide what signal it sends to the next layer.
The important thing for the neurons is that they respond to changes from layer to layer. And the way I picture it, it’s like an old-fashioned bagatelle board with the pins. You drop the marbles down, and what you’re doing is moving the pins around so that a marble, dropped at a certain place at the top, ends up in the correct bucket at the bottom. If you can imagine that in multiple dimensions, that’s kind of what you’re doing by placing the neurons and training their weights.
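The forward pass Janet describes, each neuron weighting its inputs, summing them, and passing a single squashed signal to the next layer, can be written in a few lines. The layer sizes, weights, and input values below are arbitrary placeholders chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    """The squashing function that turns a weighted sum into a signal."""
    return 1.0 / (1.0 + np.exp(-x))

# A tiny network: 4 raw inputs -> hidden layer of 3 neurons -> 1 output.
w1 = rng.normal(size=(4, 3))   # each column holds one hidden neuron's weights
b1 = np.zeros(3)
w2 = rng.normal(size=(3, 1))
b2 = np.zeros(1)

x = np.array([0.5, -1.2, 0.3, 0.9])   # the raw input data

hidden = sigmoid(x @ w1 + b1)   # each neuron combines all inputs * weights
output = sigmoid(hidden @ w2 + b2)   # single signal passed to the next layer
```

Training, the bagatelle analogy of moving the pins, is the process of adjusting `w1`, `b1`, `w2`, `b2` until inputs land in the correct "bucket"; only the untrained forward pass is shown here.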
[21:48] David
You’ve talked about neurons and layers in deep learning. Can you clarify for listeners the difference between deep learning and other forms of machine learning?
[21:59] Janet
Fundamentally, the difference between deep learning and other machine learning is the number of abstract layers that the network has. And that’s what makes it deep. As soon as you’ve got more than one abstract layer, it becomes deep.
And one of the most difficult things in deep learning is getting the architecture of your network right. How many neurons do you need? How many layers do you need? What types of neurons do you need? And that is, in itself, a bit of a dark art. You start off with a gut feel based on published networks that have been very successful. You might start with one of those, and then, if that’s not giving you the results, you start to play around with the types of networks and think, well, actually, there’s overfitting here so I need to do something about that. I need to make sure that I’m actually learning something that’s relevant to the image, rather than just my training set, so I might add a few more layers.
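One concrete consequence of those architecture choices is how many trainable weights you commit to. A rough sketch, with an invented layer layout for a fully connected network (a hypothetical 784-pixel image input, two hidden layers, ten output classes):

```python
# Counting trainable parameters for a candidate fully connected layout.

def param_count(layer_sizes):
    """Weights plus biases for each pair of adjacent layers."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out   # weight matrix + bias vector
    return total

layout = [784, 128, 64, 10]   # invented example layout
n_params = param_count(layout)   # 109,386 weights and biases to train
```

Adding or widening layers grows this number quickly, which is part of why architecture tuning is the "dark art" Janet describes: more parameters mean more capacity, but also more risk of overfitting and more data needed.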
[22:55] David
How widely are deep learning techniques employed beyond the areas of computer vision and language, where I know they’ve been so impactful?
[23:03] Janet
They’re quite pervasive, actually. It’s just that not all companies are shouting about them, and not all companies necessarily realise they’re using them. They’re there, working away in the background. If you think of a smartphone, it’s got voice recognition on it. It’s got all sorts going on under the hood. Even my phone knows where I’ve parked my car, even though I’ve told it not to remember that, just because it knows that I’ve finished driving and then suddenly I’m travelling a different way. But beyond that, it’s also learned where I go regularly. And even though I don’t have a calendar appointment in my diary, it suggests the traffic time to places that I go frequently, even if it’s not every week or at the same time every week. It’s gradually becoming more and more pervasive. We have it overtly: if you upload something to Instagram, it might suggest some tags for you, and we know it’s there in Siri and Alexa and a whole host of other things. But it’s also starting to be built in fundamentally. You see it in social media, where Twitter and Facebook will promote things they think you want, and you won’t necessarily see everything, because they quietly filter out the things that you’re not interested in and that you’ve not responded to. And that’s all part of the machine learning algorithms going on in the background.
[24:23] David
Your PhD was on the fascinating subject of the differences between biological neural networks and artificial neural networks. When you and I met for coffee a few months ago, you described how the way artificial neurons interact with one another is actually very limited compared with the way biological neurons can interact. And I was intrigued. So, could you tell me a little bit about that, and how we might apply some of what we know about biological neural networks to artificial ones?
[24:52] Janet
Okay, so artificial ones are very, very simple models of neurons — in that you have a number of inputs that go into a central place, just like the cell body of a neuron, and then you have a single output, like from the axon of a biological neuron. And that is fairly fundamental to how neurons work.
However, the connections between neurons in the brain are chemical rather than electrical. So, you have the problem that a repeated signal can get lost, because the following neuron runs out of the ability to receive that signal, because it just doesn’t have the chemicals to start a new action potential. This is why, if we stare at a colour for a long time, we sort of lose the ability to see that colour for a few seconds until everything comes back. So that’s one thing: we’re not modelling the chemical synapses at all. We’re just using electrical signals. And if we started modelling those, that could give us some level of on-the-fly adaptability. Now whether we’d want that or not is a different question — but I think there are some problems where it might be useful.
Furthermore, in our brains neurons are not these flat networks. They’re packed in very densely, in three-dimensional space, and while you don’t necessarily get electrical signals crossing over to neurons they’re not connected to, the neurons themselves can release small-molecule chemicals like nitric oxide, which can affect neurons in the close vicinity — even if they’re not in the same direct pathway. So, understanding how the neurons are packed together could also give us a way of tuning the networks differently. And these are all things that can be modelled. We’ve got the processing power to do it. It’s just a question of writing different modules to do it.
[26:47] David
And do you think that artificial neural networks will become more like biological neural networks over time?
[26:55] Janet
I think it depends on the problem. I think we will have some new type of network which will be more biological but there’ll be diminishing returns for some problems. I think for situations where we have networks that are working really well it will probably just make them slower and not necessarily any more effective. Whereas for problems that we’re struggling with right now it might be a different approach. And without doing the experiments it may or may not work — but it might well be a valid option.
[27:29] David
Deep learning itself isn’t the solution to all problems. What are the challenges or limitations with deep learning approaches today?
[27:36] Janet
I think the biggest challenge is not so much the technology — it’s how we’re approaching using it. We’re expecting it to solve all of our problems; it’s very much the attitude of “I’ll throw an AI at it and that will solve the problem.” But unless you understand the problem you’re trying to solve, it’s not going to work. You need to think of AI as being almost an employee or an intern, someone that you can offload something to. And you wouldn’t employ someone without having a task defined for them to do. If you treat AI in the same way, then it can be very, very effective. However, if you just throw data abstractly at something and expect something to come out, it’s never going to happen. And similarly, if you limit what you give the technology to something that’s not relevant to what’s going on in the outside world, then you’ll end up with the wrong answer.
[28:25] David
So, the data is kind of necessary but not sufficient?
[28:28] Janet
Yes.
[28:29] David
Let’s talk about explainability. When using deep learning AI — which uses artificial neural networks — explainability is often cited as a difficulty. Deep learning algorithms work brilliantly for a range of problems, but we can’t always understand why the artificial neural network produced the recommendation it did. And that matters when the algorithm is making a decision regarding, for example, a mortgage agreement or a loan. Do you think the problem of explainability will be addressed?
[28:59] Janet
I think it will be addressed. There is a Select Committee ongoing at the moment, that I submitted evidence to, about algorithmic transparency and the importance of understanding, if a decision has been made by a machine, how it came to that decision. Now, the problem with deep learning is that the abstraction from the original input data to the output answer is at such a level that it’s essentially mathematics that would not be followed by the layman. So, I think the only way we can get around that is to be very clear on how the network was created, the justification for that, the training data used, and the accuracy, in terms of precision and recall, that it gets. So that the person being affected by the decision has a good chance of understanding why.
[29:49] David
From a practical point of view today, explaining in excruciating detail how a database or a set of algorithms work that are used to give a loan decision wouldn’t actually be helpful to most people — and that’s before we involve machine learning. So, is the problem here just a matter of degree? Is the complexity of decision-making systems today, whether involving machine learning or not, such that we just need to get used to the idea that the most we could really ever say to someone is, “we use this type of machine learning architecture, it drew on data sets including X and Y and Z, and it took those into account to give a decision.” Is that enough do you think?
[30:28] Janet
I think we need transparency around the accuracy. Are we confident that the algorithms in the deep learning system are making the correct decision? It’s often easy to say it’s ninety-five percent accurate, and that sounds like a lot…
[30:45] David
Not if you’re getting a cancer diagnosis…
[30:47] Janet
No! And I gave the example the other day: if I said you’re going to be run over by a bus once every twenty times you cross the road, you’d take a cab everywhere. But for a different problem, like if I told you I could predict the weather completely accurately but I get it wrong about one day in every three weeks, you’d be absolutely fine with that. So, the accuracy is down to the problem. And the accuracy that we accept will depend on how important the problem is. So, we can’t have a blanket “it must be this percentage accurate”. Because it’s not like a server farm where we’re just discussing uptime. It’s got to relate to the problem. And then the people using it can make an informed choice as to whether it’s right or not.
[31:32] David
Just lastly on this. Deep learning, which is one approach to machine learning, has obviously delivered breakthrough results in quite a range of areas. But do you think we’ll see a whole new approach to machine learning, beyond deep learning, that will yield yet another step change in capability? Or are developments and refinements to existing approaches more probable over the next four to five years?
[31:54] Janet
Generally, we see complete innovation when the capability of a technology starts to tail off, and then someone comes up with something new because they start seeing diminishing returns on the investment. And we’re not quite there yet on deep learning. It’s getting there; it’s no longer advancing as rapidly as we’d like. So, I think possibly in the next few years we’ll see some new techniques coming in, but it will then take a few years for them to overtake what we’ve got already.
[32:26] David
Let’s talk a bit about productising AI. I’d like to help listeners understand the reality of developing and applying AI to solve real world problems. Many listeners won’t have a clear picture of how in practice AI is developed and deployed. Could you pick an example use case and, at a high level, walk us through the key steps involved in applying AI to solve a problem?
[32:49] Janet
Okay. So, let’s look at an example of spotting the football team that someone plays for.
You really need to look at the problem and the best way of solving it. So, how do we know which team a football player plays for? You look at the kits that they’re wearing. And then you need to realise, well, actually there’s a home kit and an away kit, and a third kit and that’s going to change every year — sometimes more than once a year. So, there’s going to be subtle differences. The team colours might stay the same but the sponsor could change, the pattern on the shirt could change and there’s going to be a whole host of other things you might need to take into account. So, that immediately makes the problem more difficult because rather than a team you’re going to say, okay, well it’s… it’s the Man U home kit, their away kit and their third kit for this specific year with this sponsor. And Man U particularly have a training kit separate to their playing kit. So, the problem which immediately sounds simple suddenly becomes quite complex.
But then it’s still conceptually solvable.
So, firstly you have to create your dataset and make sure it’s correctly labelled. That’s an understood problem. It’s a question of time, money and materials, and you can end up with a nicely labelled data set. But you have to allow for that time and for the storage of that data set.
Then you need the people and the hardware to develop a solution. Now, in order to do anything with machine learning or deep learning at any scale, you’re going to need machines with very large GPUs. These aren’t cheap, but thanks to the gaming community they’ve become much better over the years, and you need to invest in them. Every researcher needs to have at least one of their own, if not several, because these models take time to run. So your researcher will start looking at the different teams. They’ll start building an architecture and they’ll come up with something. Now, if they do this in Python and use TensorFlow, then it’s pretty easy to translate into something that can be deployed on a machine and come out with a classification.
If they use some of the other techniques — so they might use MATLAB, they might use R or something else — then you’re going to need someone else to translate that into something that’s deployable.
And that’s where the difficulty lies. And a lot of researchers who come from academia are used to using things in a certain way and are not necessarily used to interfacing with production teams. So, you may find that the way in which they’ve developed something is not efficient, because there’s no time limitation when you’re just sat in a lab and you have a number and you need to output something else. Whereas, if you’re looking at a Twitter firehose you’re really going to need to have something that’s effective and efficient and gives you the answer quickly.
You also need to take into account how precise you need your answer to be. Does it matter if you say that a Liverpool player is a Man U player? Some people might say no; I’d imagine the board of Man U would be very angry if you showed them a solution that did that. So, adjusting your recall and precision for the problem set can often be an iterative process with a client. And again, not many machine learning researchers have that experience or ability. So, you need some sort of interfacing layer — someone who can speak the language of the researchers and the clients and the production team. And those sorts of people are difficult to find.
[36:31] David
How difficult is it to productise AI? That is, to move from the lab environment, with test data, to solving a messy real-world problem?
[36:40] Janet
It’s really down to the problem. Because sometimes you can get a really effective solution just using something simple in a lab. But if I’m going to autonomous vehicles, moving from a very sterile test track to one of the roads here in London where you’ve got aggressive drivers, you’ve got cyclists all over the place, couriers, pedestrians who take no notice of the traffic lights — all of those sorts of things combined
make for a really, really difficult problem. So, productising something like that is far more difficult than just telling someone the tags that are in their image.
[37:21] David
You’ve described a lot of the difficulties in the process, and steps involved. A lot of the cloud platform providers — Google, Amazon, IBM, Microsoft — offer a range of hardware infrastructure and also off-the-shelf machine learning services to do a lot of this for people. And they purport to do a lot of the heavy lifting. To what extent are they a panacea? Where are the limits of that?
[37:44] Janet
Okay. Well, from a hardware provision point of view, the cloud vendors are great. Because you can scale, you can get things done very quickly, especially from a start-up point of view. You don’t need to have a huge investment in a big server farm. You can pay by the hour to do what you need to do. And they’ve all got deals for doing things when people aren’t using them, which is fantastic. So, from that point of view it’s great.
The tools they provide are also very, very useful, and if you haven’t invested in a very experienced team already — you have a smaller team — and you want to get something done quickly, they’re absolutely fantastic. You can dive in. You get something pretty good relatively quickly.
The distinction comes when you’re trying to solve a very, very narrow problem. Something that no one else has done before and it requires a difference in architecture. You might require different libraries to what they provide. You might even need to modify standard libraries in order to solve your own problems. And I’ve had situations where I’ve needed to extend TensorFlow to solve a problem specific to me and I’m not able to do that with a cloud provider because I can’t change the code that’s on their systems.
So, it depends on the problem that you’re trying to solve — how generic it is or how specialised it is and the resources you have locally. Because if you have your local team and they’re all very
experienced then you can do things faster and more cheaply using hardware in house than you can doing it remotely on the cloud.
[39:24] David
What are the key challenges involved in productising AI?
[39:28] Janet
I think the biggest one for me is the… the accuracy and possibly the efficiency. So, talking efficiency first, ensuring that what you’ve built works in the real world is difficult. So, you need to go through testing phases and you may find that what you’ve created, even though it’s quite accurate, would require too many services and not be cost-effective for what you can sell it for. So, ensuring that you have that end-goal mindset when you’re developing it is essential. Because you may find that, you know, if it takes five minutes to come up with an answer that is just not going to work. So, understanding that early in the phase is important.
But on top of that, the answer that you get needs to work in the real world, and if you don’t look at real-world data early then you’re never going to have something that’s productisable.
[40:23] David
So, it sounds like a key success factor for startups listening to this that are involving AI is to start testing in the real world as soon as you can. Move from the lab to the real world?
[40:33] Janet
Absolutely. Because you may find that something, you know, some of your early models might be 80%, which isn’t quite what you need it to be but it might seem quite good. And then you put it out into the real world and all of a sudden, you’re getting a much, much lower figure and you’ll have to manually test that. Because you haven’t got the segmentation of what’s right and what’s not. But you’ll very quickly see just by eyeballing the data and how it classifies it, whether what you’ve got is working at the same level as what you think it should be or not.
[41:05] David
How can companies successfully gain access to the training data they need? Should they be thinking about data acquisition strategies?
[41:13] Janet
Absolutely. I mean there’s a lot of data out there but the copyright for the data that’s up on social media, and that you see when you do a Google image search, is with the person who uploaded it. It’s not just freely available for you to use and do what you like with. There are data sets available. Some of them are licensable for industry, some of them are not. So, you need to be very aware of that and you may find that you have to create your own data or work with a party who has access to the data that you need. And I think that’s a very important starting point before you try and
solve a problem.
[41:49] David
Now it’s often noted that, in reality, AI developers spend 80% of their time preparing, cleaning, labelling data. Only a minority of that time can actually be spent applying, optimising machine learning algorithms. Do you think that’s right? And to what extent will tools be developed to automate this data preparation process?
[42:09] Janet
I don’t think that stat’s right. I think it’s one of those stats that sounds like it should be but probably doesn’t have any background in fact. Generally when you’re acquiring data and preparing it you’ll write a script and then it might take 80% of the time to process but you’re not sat there watching it. So you’ll be doing other things. You’ll be creating your networks. You might be trying things out on a subset of the data. But I think the whole labelling and ensuring that you have trusted data is key. It’s going to be difficult to automate that because in order to automate it you’re going to need something that’s clever enough to know how to label it which might be the problem…
[42:52] David
…which is the problem to solve in the first place, right?
[42:53] Janet
… so having that beautifully accurate human-labelled data is critical and there’s no getting around that.
[43:02] David
What bottleneck or barriers to productising AI today will be addressed, do you think, in the next three years — perhaps through better tools — and what difficulties will remain?
[43:13] Janet
I’m going to start out with the difficulties. Part of the bottleneck of productising is a lack of quality assurance in the AI researchers themselves. It doesn’t appear to be something that is taught as part of any of the courses they teach. The practices of how to build networks and how to tune them… but thinking about ensuring they’re tested and efficient doesn’t appear to be on any of the syllabuses. So, you end up with people who understand networks but aren’t ready to go into industry and create things that are actually worthwhile. And that’s a huge problem. And I think even if that was changed right now it’s going to take more years for that to filter through. And that’s one of the biggest problems I see — that the data scientists are actually not very good at science. Which is a terrible thing to say but you shouldn’t just be creating things that work. You should be thinking how could this possibly break, and actively trying to break it, and only then can you confidently say that it works. And time and time again I just don’t see that out there in industry. So, that’s one of the biggest, biggest problems in terms of productising difficulties.
So, what can be addressed in the next few years? I think being more intelligent about how to productise. Getting that pairing of the AI researchers and non-AI developers, who are very talented and understand the systems, and the efficiencies and better ways of programming. Get them together and almost pair programming. They’ll learn from each other and you’ll end up with a much better AI researcher and a much better developer because of it. And that transition will be smoother.
[44:56] David
And more broadly, what advice would you offer teams developing AI that you think will help them productise more successfully?
[45:05] Janet
I think it’s exactly that. Get rid of the barrier between the research team and the standard product development teams. Because so often it can become a siloed environment where they don’t talk and they don’t think that they’ll understand each other’s work. Or they don’t really care about each other’s work because it’s too different. But having an understanding of what everybody’s doing and how it fits… it’s the commercial viability. And if AI researchers can understand the commercial aspects of their work then they’ll be able to productise what they’re doing a lot better.
[45:35] David
This seems like a good time to talk about building great AI teams. To develop and deploy AI, should today’s large companies, in sectors ranging from manufacturing through to retail, engage with third party AI software providers? Or build their own in-house AI teams? Or a combination?
[45:54] Janet
It depends, really, on the problems they’re trying to solve. Because there’s a huge sense that you shouldn’t reinvent the wheel and if it’s going to take you ten researchers and a load of hardware to solve a problem, but for a fifth of the cost you can pay for an API to do it, then pay for the API to do it. So, I think you need a good understanding of what’s available and the costs of it compared to your in-house team.
But also, will what’s available solve the problem? And it might be that it does, in which case great. But if it only goes halfway there, or doesn’t at all, then you’re going to have to look at something bespoke. And that means either working with another provider to do it for you, or building your own team in-house.
[46:34] David
And for companies that are building AI teams, how real is the war for talent in AI?
[46:40] Janet
It is very real. It reminds me very much of when .NET first became a thing and anyone that even had .NET vaguely anywhere near their CV was being snapped up as soon as they were on the market.
[47:45] Janet
Now the problem is that not all AI talent is equal — and it’s almost more difficult to work out the right sort of people. Because you have people coming from all stages of academia, from very junior — just finished a degree, that might have an AI component — to researchers who’ve been doing AI for many years and are effectively quite senior. But then you need to look at how they will fit into your organization and the value that they give you rather than the salaries they’re asking for. And that’s the difficulty, because you can end up with people, just hiring without due process and getting
people in. But without good recruitment practices, just like any other aspect of your business, you’ll end up with the wrong people and you won’t get a good solution at the end of the day.
[47:41] David
Do you think supply constraints will ease in the medium term or not?
I think so. There are so many people taking courses and, understanding that, I think the good ones will float to the top and the people that aren’t effective will retrain and move on to the next thing that they think will get them a role.
[48:02] David
How can companies find the best AI talent?
[48:04] Janet
I think networking is a big thing. There are a lot of conferences, showcases, meet-up groups and if you get out there and you talk to people and you can excite them about your company then, you know, they’ll want to come to you rather than going through agencies necessarily or doing a job search.
Failing that, if you don’t have the time for that yourself then you need to find a specialist recruitment agent. Not someone who only knows the buzzwords, but who really understands it and can talk to these people in a language that will give them the confidence that they know what they’re talking about and can represent you appropriately.
[48:41] David
How can startups compete against the high salaries being paid to AI professionals by today’s largest technology companies, including Google, Amazon, Facebook and indeed just incumbents in sectors like financial services?
[48:55] Janet
Well part of it is, you can offer something different at startups. And whether that’s a combination of equity and salary, or better working conditions or work life balance, or even more interesting
problems. Because there are so many jobs out there, AI talent can be very picky about the ones that they go for. So, you need to make the roles attractive. And not everyone is after the highest salaries. They want something intellectually fulfilling because they’re problem solvers at heart. So, if you can offer a role where they’ve got a lot of variety and challenge but they feel that they’re supported then they’re more likely to pick you over just a big name.
[49:33] David
And how can companies assess AI team candidates effectively? How do you separate the best from the rest?
[49:41] Janet
It’s really, really difficult. Because unlike traditional developers, where you could just give them a coding task as part of an interview process, AI solutions take a while to create. So, you then either say ‘I’m going to give you a task, come back to me in a period of time’, which is very risky because talent can get snapped up quite quickly, or you try and give them a shorter-term problem-solving task and accept that if you’ve done your due diligence and they’re not lying about what’s on their CV and they can show you their problem solving abilities and their intelligence to pick things up, then they’re probably going to be the right sort of person.
[50:19] David
How do you think about structuring AI teams? What balance between research, if any, and engineering do you think is best for building AI capabilities?
[50:30] Janet
It depends on what you have elsewhere in the business. From a startup point of view, you need your AI team to wear multiple hats. They need to create solutions that pretty much productise straight out. So, finding that balance is really tricky. However, understanding that the timelines to develop something will include an aspect of research is important. And whether that is just mentally adding twenty percent onto your timelines, knowing that they’re going to take a bit of time off, and supporting them in that time, so that if they come up with something that’s paper-worthy you say okay, let’s just do this little bit of extra work to get that data. That really helps. But you’ve got to be aware of that and make sure that you can see that balance change in your team to ensure they’re happy or you’ll lose them to someone else.
[51:18] David
So, it’s something you evolve and evaluate continually over time?
[51:22] Janet
Absolutely. And if you have a really nice collaborative environment, where everyone feels happy to talk about it, then they will come to you when they feel that something’s not quite right.
[51:32] David
Help us understand the dynamics of managing an AI team. How do you keep an AI team happy and productive? And do their dynamics differ from other developers? Are they different beasts here?
[51:42] Janet
I don’t think the dynamics themselves differ from other development teams. And I’ve managed quite a few different teams over the years, some with more challenges than others.
I think in the AI teams, if you think of it just as a specialist development team then you treat it the same as any others. You make sure that the team’s happy, that they’re listened to, that they’ve got everything they need. And as a manager you’ve got to make sure that their blockers are removed. And whatever their blockers are, whether it’s not understanding a problem or not understanding the commercial aspect, you’ve got to break that down for them until they understand enough that you can just let them go away and do things.
[52:20] David
I’ll finish if I may with our traditional quick-fire round! Six questions, so just one or two word answers each.
[52:26] Janet
Okay!
[52:27] David
Firstly: is the promise of AI overhyped?
[52:30] Janet
Tricky. Yes, right now.
[52:34] David
In which sector do you think AI will have the most profound impact?
[52:38] Janet
Transport.
[52:39] David
Do you think AI will destroy more jobs than it creates?
[52:43] Janet
Absolutely not.
[52:44] David
Should we worry a lot about autonomous weapon systems?
[52:47] Janet
Yes.
[52:48] David
Will we achieve the AI singularity, when general AI triggers a period of unprecedented technological change? And if so, when?
[52:56] Janet
Yes. And I think…twenty years.
[52:59] David
And finally: should AI systems of sufficient intelligence have rights?
[53:04] Janet
Yes. Although I’m going to say we need to define sufficient intelligence because we don’t understand our own yet, properly.
[53:09] David
That seems a good place in which to finish. Janet Bastiman, thank you very much.
[53:13] Janet
Thank you.
[53:14] David
We hope you’ve enjoyed this episode of MMC Ventures’ “Beyond The Hype” podcast, presented in association with Barclays.
Follow us on Twitter @MMC_Ventures and explore our research at mmcventures.com
Don’t miss our next episode, where Rob High, IBM Vice President and Chief Technology Officer at IBM Watson, describes how AI will augment human capability with cognitive computing and create new opportunities for competitive advantage.
| [PODCAST] Episode 4: Understanding AI Technology | 0 | podcast-episode-4-understanding-ai-technology-105243a94df3 | 2018-06-18 | 2018-06-18 11:32:27 | https://medium.com/s/story/podcast-episode-4-understanding-ai-technology-105243a94df3 | false | 9,733 | A collection of stories and experiences from the early-stage technology and venture capital communities. Curated by MMC Ventures. | null | null | null | MMC writes | mmc-writes | VENTURE CAPITAL,EARLY STAGE,INNOVATION,INSIGHTS,TECH | MMC_Ventures | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | MMC Ventures | Find content from MMC Ventures at https://medium.com/mmc-writes | 4669d6e036f1 | MMC_Ventures | 1,652 | 156 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-17 | 2018-03-17 19:04:16 | 2018-03-17 | 2018-03-17 19:05:39 | 0 | false | en | 2018-03-17 | 2018-03-17 19:05:39 | 2 | 10525444aca8 | 0.856604 | 0 | 0 | 0 | Use of Machine Learning (ML) is a hot topic in cybersecurity, one which will undoubtedly shape the industry for years to come. To see… | 5 | Why A Computer Beating Poker Pros Is Great News for Cybersecurity
Use of Machine Learning (ML) is a hot topic in cybersecurity, one which will undoubtedly shape the industry for years to come. To see evidence of this we’d have to look no further than the booths at this most recent RSA Security Conference, where ML was promised as a solution for corporate cybersecurity problems. But why exactly will ML play such a prominent role, and how could it prove useful? Oddly enough the answer comes from the recent victory of ML in a game of poker.
A competition took place in Pittsburgh last month that matched top poker players against a Machine Learning system called Libratus. This tournament shared some similarities to previous victories in checkers,chess, go and Jeopardy!, all of which hinted at the promise of ML. In this particular competition, four players each individually faced the computer in a 1–1 match. Rather than the traditional setup (in which a poker face can be as important as the cards you have), this competition was more analogous to playing online- no player had access to facial expressions or visual/audio cues, and computers served as mediums.
| Why A Computer Beating Poker Pros Is Great News for Cybersecurity | 0 | why-a-computer-beating-poker-pros-is-great-news-for-cybersecurity-10525444aca8 | 2018-03-17 | 2018-03-17 19:05:40 | https://medium.com/s/story/why-a-computer-beating-poker-pros-is-great-news-for-cybersecurity-10525444aca8 | false | 227 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Tech Elf | All about interesting and new stuff from around the Tech World. | d0437bce9f59 | techelff | 2 | 32 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-11 | 2017-12-11 18:39:52 | 2018-01-05 | 2018-01-05 19:01:35 | 3 | false | en | 2018-07-19 | 2018-07-19 01:26:19 | 5 | 1052beea30ae | 4.870755 | 0 | 0 | 0 | Reviewing opinion from text | 4 | Sentiment analysis
Reviewing opinion from text
This blog gives a brief introduction to text sentiment.
Introduction
Sentiment analysis — otherwise known as opinion mining — is a much bandied-about but often misunderstood term. In essence, it is the process of determining the emotional tone behind a series of words, used to gain an understanding of the attitudes, opinions and emotions expressed within an online mention. It’s often used to determine opinions, trends and insights from text.
These insights can be used by policymakers and operating bodies as quantitative feedback to make corrections. Different models have been implemented to understand text and create sentiment from it. Measures of sentiment are used to understand contextual opinion (sad, happy, disgust, etc.). In this discussion we have restricted ourselves to three sentiment classes (positive, negative and neutral).
Guessing Sentiment
That said, creating automated sentiment from text is quite challenging: human language and writing generally use slang, and grammar and spelling mistakes are very common. As such, creating automation around text sentiment requires a lot of preprocessing.
Sentiment example from a popular website
Data
Since the advent of internet technology, an extreme amount of text data has been generated. This data is generally available in the form of articles, reviews, blogs, microblogs, comments, etc. Lots of text corpuses are available on the internet for free; many educational and research organizations such as Stanford, CMU, etc. provide large text corpuses for free, which can be used for model training. Free real-time text can be captured easily from free blogging websites such as Reddit or IMDB, or microblogging websites such as Twitter. APIs are provided for data collection, which control data flow, query limits, etc.
Different sets of data require different types of preprocessing and modeling approaches. Microblogs such as Twitter consist of sentences with inconsistent grammar, slang, and tagging (keywords). These formats are intuitive and easy to understand for a human, but it is very complex for a program (a mathematical model) to extract information from them. In cases like these, where the whole information can be deduced from a single word or a small part of a sentence, a word-list based method is used. Here the sentiment of the sentence is deduced by weighing every word.
The other set of text data is articles and blogs, which are generally collections of many (hundreds or thousands of) sentences. Here information is conveyed in the form of a small set of meaningful words, and the overall sentiment of the text is the aggregated result of the most influential sentences in the text.
Modeling
Text-based prediction models work in two basic steps: preprocessing text from string format into numerical format, and then building a machine learning or mathematical model on top of it. Traditional preprocessing methods convert text into vectors which are sequences of frequency measures or absolute counts of each word; these methods don’t account for the context of a word in a sentence, and their size in memory increases exponentially (a text with a million different words will create a matrix with a million columns!). More advanced methods, built on multi-layered neural networks, create numerical vectors by reducing the reconstruction error of recreating the text with the same neural network. More details are discussed in the next section.
Preprocessing Step:
Preprocessing is a major part of text mining. In the first step, the preprocessing module removes unwanted parts of the text:
→ Cleaning (removing unwanted characters: ?, #, @, etc.)
→ Stemming (removing inflected forms so that walk, walked and walking are all treated as walk)
→ Lemmatization (so that good and better can be treated the same)
→ Removal of URLs (http, www, .com, etc.)
… etc.
This sequential cleaning module removes unwanted characters, formatting, etc. and simplifies sentences to their minimal form, thus removing any unneeded information.
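As a concrete illustration, the cleaning and stemming steps above can be sketched in plain Python with regular expressions. The rules here (URL/mention removal and a toy suffix stripper) are simplified stand-ins for what libraries such as NLTK provide:

```python
import re

def clean_tweet(text):
    """Strip URLs, mentions/hashtags and punctuation, then lowercase and tokenize."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)                # remove mentions/hashtags
    text = re.sub(r"[^a-z\s]", " ", text.lower())       # drop punctuation/digits
    return text.split()

def crude_stem(token):
    """Toy suffix stripper: walked/walking -> walk (real stemmers are smarter)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = [crude_stem(t) for t in clean_tweet("Walking home!! http://t.co/x #happy @user")]
# tokens -> ["walk", "home"]
```

In practice you would use NLTK’s PorterStemmer and WordNetLemmatizer for the last two bullet points; the sketch just shows the shape of the pipeline.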
2. Next is the conversion of the sequence of words into a numerical vector that can be processed further (either in classification or in other tasks). Conversion of text into numerical vectors can be done in some traditional or some more advanced ways…
→ Count vectorizer: essentially a one-hot (word-count) encoding of the sentence; it generally results in very large vectors (impractical for large texts). Sparse matrices are used to reduce the size in memory.
→ TF-IDF: Term Frequency–Inverse Document Frequency is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval, text mining, and user modeling. The tf-idf value increases proportionally with the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general.
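The statistic can be computed directly on a toy corpus (the three reviews below are made up for illustration); real implementations such as scikit-learn’s TfidfVectorizer add smoothing and normalisation on top of this basic formula:

```python
import math
from collections import Counter

docs = [
    "the movie was great great fun",
    "the movie was terrible",
    "the acting was great",
]

def tf_idf(docs):
    """Weight each word by term frequency times inverse document frequency."""
    tokenized = [d.split() for d in docs]
    n = len(tokenized)
    # document frequency: how many documents each word appears in
    df = Counter(w for doc in tokenized for w in set(doc))
    weights = []
    for doc in tokenized:
        tf = Counter(doc)
        weights.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return weights

weights = tf_idf(docs)
# "the" appears in every document, so its idf (hence its weight) is zero,
# while "terrible", unique to one review, gets the highest weight.
```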
→ Word embedding (based on relational structure): based on continuous bag-of-words, skip-gram, etc., a relational model is trained. The model incorporates the knowledge of a word’s neighbours (the word(s) coming before/after it). A numerical vector is thus generated based on the similarity/dissimilarity of a word with respect to others. The size of the vector can be controlled by user input (a random seed is generated accordingly). Word2Vec is one of the most famous word embeddings.
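To make the skip-gram idea concrete, here is a minimal sketch of how (center, context) training pairs are generated from a window around each word; an embedding model such as Word2Vec is then trained so that words seen in similar contexts end up with similar vectors:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs as in the skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        # every token within `window` positions of the center is a context word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "cat", "sat"], window=1)
# [("the", "cat"), ("cat", "the"), ("cat", "sat"), ("sat", "cat")]
```

Libraries such as gensim wrap this pair generation and the neural training loop behind a single Word2Vec class.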
→ Using a DNN (deep neural network), a multilayer neural network can be used to reconstruct the text via multiple hidden layers. The information at each hidden layer can then be used as a characteristic representation of the text sequence.
Modeling(classification):
Once sequences of text are converted to numerical vectors, they can be used in a mathematical/machine learning modeling method to separate them into different classes. Simple count-based vectors (count vectorizer) pair naturally with a naive Bayes classifier, which is “naive” in treating each word as i.i.d.; it is also comparatively fast compared to other classification methods.
But once texts are properly converted to numerical vectors, even a logistic regression will perform well at delivering distinct classes (positive, negative, neutral).
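As an illustrative sketch (with a made-up four-sentence training set), a multinomial naive Bayes classifier over raw word counts fits in a few lines; in practice one would use a library implementation such as scikit-learn’s MultinomialNB on the vectorized text:

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative training set (made up for this sketch).
train = [
    ("great movie loved it", "pos"),
    ("wonderful great acting", "pos"),
    ("terrible boring movie", "neg"),
    ("awful waste of time", "neg"),
]

def train_nb(examples):
    """Count words per class for a multinomial naive Bayes model."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class with the highest log posterior, using Laplace smoothing."""
    total = sum(class_counts.values())
    best_label, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

model = train_nb(train)
print(predict("great wonderful movie", *model))  # "pos"
```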
An mLSTM-based model trained on data from 84M Amazon reviews can be applied directly to text (because of its vast dictionary). Below is a tested result. But because the model is bulky, prediction takes a good amount of time.
Sentiment on the Dunkirk movie
Another simple approach to sentiment is to aggregate the sentiment of each word, but this approach works for short sentences only.
AFINN is a set of English words that have been rated numerically on a range from highly negative to highly positive. Combining the scores of all of these words gives a reasonable sentiment for each sentence. AFINN has the advantage of being more practical than many other modeling methods, as it can account for the impact of emoji, slang words, etc., but at the same time it scores sentiment at the word level rather than for the whole sentence.
Since Twitter posts are short sentences, this approach, along with slang-word knowledge, can be implemented directly. Code details for the same follow.
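A minimal sketch of this word-list scoring looks like the following; the handful of valences below are illustrative stand-ins for the real AFINN lexicon, which rates each of its words from -5 to +5:

```python
# Illustrative subset; the real AFINN lexicon rates each of its words from -5 to +5.
AFINN_SAMPLE = {
    "good": 3, "great": 3, "happy": 3, "love": 3,
    "bad": -3, "terrible": -3, "sad": -2, "hate": -3,
}

def sentence_sentiment(text):
    """Sum per-word valences; the sign of the total gives the sentence class."""
    score = sum(AFINN_SAMPLE.get(w, 0) for w in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentence_sentiment("What a great and happy day"))  # positive
```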
Sample Published Text-analytics module:
Using the above techniques, I have developed sentiment analysis on Twitter and on news and blogs from the Webhose API.
In most of the methods AFINN is used as the sentiment model by default, but by passing the (dnn) parameter the mLSTM-based model can be used.
https://github.com/Pked01/Analytics/tree/prateek/Text%20mining/Sentiment%20analysis/Twitter%20and%20Webhose%20sentiment
| Sentiment analysis | 0 | sentiment-analysis-1052beea30ae | 2018-07-19 | 2018-07-19 01:26:19 | https://medium.com/s/story/sentiment-analysis-1052beea30ae | false | 1,145 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | prateek khandelwal | null | 872be6d51812 | khandelwalprateek01 | 4 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 8b6c0a3b78a | 2018-09-20 | 2018-09-20 04:57:11 | 2018-09-20 | 2018-09-20 05:03:22 | 1 | false | en | 2018-09-20 | 2018-09-20 05:03:22 | 9 | 1053ddfdc739 | 1.766038 | 4 | 0 | 0 | Transparency is a core value that we at AQUA believe in. We understand that trust needs to be built between us and our community by showing… | 5 | Weekly Update #9
Transparency is a core value that we at AQUA believe in. We understand that trust needs to be built between us and our community by showing all our progress and setbacks. In the time leading up to our AQX Token Sale, we plan to post updates each week. We are looking forward to providing everyone with announcements, progress reports, as well as any changes to our goals or ideas throughout the process.
Announcements:
Our Pre-Sale phase #1 for 100% BONUS will end October 6th, 2018. Phase #2 for 50% BONUS will last until November 5th, 2018. You can see the live counter by signing up on our dashboard at ico.aquaintel.io
Our Telegram community is still growing rapidly so join us today!
Our Team will be attending the HILTON Technology Leadership Forum in February 2019
We had an awesome time at Token Fest Boston! We also had a few interviews with links here. We will keep the community informed of possible meet ups!
We had our smart contract successfully audited by Quantstamp. You can read more about it here.
Development Updates:
We completed our draft wireframes for AQUA Mobile. We are now working on the architecture and UI/UX of these wireframes. Rigorous testing on our smart contract models is also in progress.
Legal:
Lawyers are in the process of drafting the terms & conditions for an SEC-compliant token sale.
Team:
We are always looking for talented individuals to accelerate AQUA globally. If you believe you can help contribute to our mission, feel free to reach out to [email protected]!
About AQUA:
AQUA Intelligence is developing a data-driven platform on the blockchain that will allow consumers to monetize and validate their personal data. AQUA has planned a complete roadmap to build AQUA Intelligence and has proven its ability to execute, with a demonstrable product in service. Part of their strategy is to gather data from their current products and discrete data sources to build the industry’s first comprehensive profile system for the international market. By leveraging Artificial Intelligence along with predictive analytics, these profiles will enable businesses to significantly improve sales, retention, conversion and customer satisfaction. AQUA is poised to evolve in a multi-billion dollar industry with significant market potential. You can learn more about AQUA at aquaintel.io and help support our mission.
You can read more about AQUA Intelligence on our whitepaper.
Follow AQUA INTELLIGENCE on our Website, Telegram, Reddit, Facebook, or Twitter for the latest updates on AQUA development!
(End of Weekly Update #9, AQUA Intelligence, by Harsha Cuttari, 2018-09-20)

what is your purpose to live?
(2018-05-14)
Currently, I am working with people who run a business mixing AI and robotics in Japan. They are trying to automate legacy factory systems, built on many sensors and motors, with computing, AI, and robotics. My part is controlling analog relays and analog circuits, and I would also like to enhance those systems with FPGAs/ASICs as an electronics/software engineer. I still have some fear of trying new challenges, but I just focus and ask: what is the purpose of your life?
Some people say the important thing is working hard and earning enough money to spend your time well. The mentors in my life have advised me many times on how to lead a great life. They taught me that the above is not really the point of life: you have to take on challenges and become a pioneer who can change the world by facing its problems!
So, that is why I want to take on new challenges and change the world from Japan!
(by Kyohei Moriyama)

Computer and Information Science Alum Justin Hsu Receives Distinguished Dissertation Award
(Penn Engineering, 2018-07-11)
Justin Hsu
Justin Hsu, a recent graduate in the Department of Computer and Information Science (CIS) at Penn Engineering, has received the John C. Reynolds Doctoral Dissertation Award for exploring and formalizing several proofs critical to the field of differential privacy.
The award comes from SIGPLAN, the Association for Computing Machinery’s Special Interest Group on Programming Languages.
Hsu’s dissertation used methods from theoretical computer science to prove the privacy of three important differential privacy algorithms. He is now one of many Penn engineers to make contributions to the growing field, which uses randomized algorithms to protect the information of an individual when used as part of a larger data set. Such algorithms are gaining traction in the era of “big data,” when large companies like Google are capable of collecting and utilizing user information on an unprecedented scale.
“These are the first proofs of these properties to be presented in machine-checkable form — a milestone achievement, and one that had been attempted unsuccessfully many times before,” said Benjamin Pierce, Henry Salvatori Professor in CIS and one of Hsu’s dissertation advisers. Aaron Roth, Class of 1940 Bicentennial Term Associate Professor in CIS and one of the founding figures in the field of differential privacy, is also one of Hsu’s advisers.
“Concretely,” Pierce said, “these results constitute a fundamental advance in our ability to mechanize key properties of important randomized algorithms such as those found in the differential privacy and machine learning literature. Conceptually, they point the way to further synergies among ideas from algorithms, programming languages, and formal verification — the beginnings of a rich new area in which Justin will be recognized as a pioneer.”
(by Penn Engineering)

PreSeries Predicts! Top Scandinavian Startups in Q1 2018 (per country)
(PreSeries, 2018-04-23)
PreSeries predictive algorithms crawl the web hungry for startup information. So far, almost 400k companies have been ruthlessly processed, scored, and ranked. Today, we offer you a sneak peek at the PreSeries Dashboard and our latest ranking of the hottest startups in Scandinavia (Denmark, Norway, and Sweden).
Most Promising Danish Startup
GenieBelt has built a project management and communication platform to improve efficiency in the way construction teams collaborate and communicate. That, of course, includes multiple sub-contractors involved in a construction project of any size. Each project gets an overview and instant access to project updates, which can be filed in real time via a mobile app and mobile phone camera by construction workers and site managers actually on the ground. The simple digitisation of the project management and communication process, sometimes even replacing the legacy use of pen and paper and Excel spreadsheets has the potential to dramatically improve the productivity of the construction workforce.
Most Promising Norwegian Startup
The Future Group is a provider of interactive mixed reality technology for television, mobile devices and desktop. The company’s platform enables users to play in a fully rendered three-dimensional (3D) environment. Based in Oslo, Norway, The Future Group employs more than 100 people from over 20 countries. The company was founded by Jens Petter Høili and Bård Anders Kasin in 2013. Advisors include Nolan Bushnell, the legendary founder of Atari. Partners include FremantleMedia, Epic Games and Ross Video.
Most Promising Swedish Startup
NA-KD is an online store for apparel. The company provides a marketplace for fashion, apparel and accessories for women. It also offers sportswear, beauty products and books. It is one of Europe’s top 20 fastest-growing companies, breaking new records every month and positioning itself as one of the leading fashion companies in the world. Born in 2015, NA-KD has been growing ever since, creating the hottest fashion trends across the globe and being seen on the hottest influencers and celebrities.
Love PreSeries AI-driven rankings? Stay tuned, follow us at @PreSeries & #PreSeriesPredicts
Want to build your very own startup deal sourcing & assessment platform with PreSeries? Get in touch here!
(by Fabien Durand)

Deep Learning Data Pipeline Management: Building a Web Client to Optimize Data Testing Process
(BinaryVR, 2018-08-30)
Project done by Seunghwan, BinaryVR software engineering intern
Have you ever wondered how deep learning/machine learning data pipelines are managed in practical applications? To learn more than just the theoretical notions, Seunghwan started an internship at BinaryVR and was assigned to a project on data pipeline management. As he expected, the project was full of opportunities to deal with enormous amounts of data and processing cycles with his own hands. In this article, you will learn about his project and the key challenges and concepts he encountered.
What Is This Project About?
The project is highly related to deep learning/machine learning data pipelines. In the same manner as other DL/ML-powered technologies, BinaryVR’s facial landmark tracking technology (a.k.a. BinaryFace*) is developed through the following major steps — data feeds, data training, and data testing.
* BinaryFace is a real-time facial landmark tracking solution of BinaryVR running on mobile with a RGB camera. To learn more: www.binaryface.com
Seunghwan’s project: building a web client and its tools to manage the data pipeline for the evaluation process in DL/ML to improve the performance of BinaryFace
If you take a closer look at the ‘data testing’ stage, there are three smaller steps in the pipeline. What we do here is sort out failed cases and compensate for them by re-training the algorithm.
What Does the Web Client Do?
The purpose of this project is to build a web client that effectively manages the ‘data testing’ pipeline. The tools implemented in the web client act as assistants for evaluators sorting out images with inaccurate tracking results. Images can be pre-annotated or captured in real time from an evaluator’s camera. Later on, evaluators collect and reprocess the sorted data and send it back to the training stage. This helps our DL/ML model re-learn and fix the algorithm by incorporating its mistakes.
Here is one of our tools, ‘valid landmark checker,’ and how it works:
Unfortunately, his project goal was not only about implementing each tool in the web client. By the nature of a web client, the more important factors were optimization and structuration to reduce frontend waiting time for evaluators. Frankly, no matter how beautifully a website is built, we all know that we cannot stand excessive loading times that never seem to end.
Challenges for Building the Web Client
To build a web client for effective data pipeline management, Seunghwan’s key challenges can be summed up in two words — optimization and structuration. Here are the main concepts we will be dealing with.
Optimization
- Asynchronous processing
- P2P direction connection
Structuration
- Dependency & Infrastructure Codification
Optimization: Asynchronous Processing
As our web client has to load and save an enormous amount of images and reduce frontend waiting time as much as possible, Seunghwan optimized servers in several ways. First, we integrated asynchronous processing to efficiently split and distribute workflow processing times.
The key difference between synchronous and asynchronous processing lies in when the client receives a response after sending an order to a server. If the client has to wait through the processing time for the order and then receives its result, we call that synchronous processing. If the client receives a response that the order was received right after placing it, that is asynchronous processing. Asynchronous processing allows the client to keep working on other tasks while the server is dealing with the previous one.
Example: Assume that we have three tasks to finish — 1. take out a burrito, 2. grab a latte, 3. take a walk. Synchronous processing would mean that you need to wait at the restaurant counter until the food is ready for you to pick up. While the chef and the barista are making your burrito and latte, you are wasting time waiting. On the other hand, asynchronous processing is akin to the restaurants allowing you to shop at other establishments while your order is being prepared; they will text you when your meal is ready. As you can probably tell, you are the client and the chef and barista are the servers. The same can be said of the server side: servers can work without waiting for clients to order, since they can queue orders beforehand.
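The burrito/latte analogy maps directly onto asynchronous code. Below is a minimal illustration using Python’s asyncio — our own sketch, not BinaryVR’s actual server code: the client fires off both orders, takes its walk immediately, and collects the results once the “servers” finish.

```python
import asyncio

async def prepare(item, seconds):
    # A "server" preparing an order; the sleep stands in for real work.
    await asyncio.sleep(seconds)
    return f"{item} ready"

async def main():
    # Fire off both orders without waiting for either to finish.
    burrito = asyncio.create_task(prepare("burrito", 0.02))
    latte = asyncio.create_task(prepare("latte", 0.01))
    walk = "took a walk"  # the client keeps working while the servers cook
    done = await asyncio.gather(burrito, latte)  # collect results when ready
    return [walk] + list(done)

timeline = asyncio.run(main())
print(timeline)
```

With synchronous calls the three tasks would take the sum of their durations; here the two waits overlap, which is exactly the time saving asynchronous processing buys.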
At the macro level of the entire workflow, we designed our CPU and server operations to be asynchronous so that tasks are received with no time wasted. Specifically, asynchronous processing is used for transferring tasks between clients and servers and for time-consuming image processing.
Optimization: P2P Direct Connection
Another challenging task was implementing a P2P direct connection to enable a real-time BinaryFace trial in a web browser. We wanted to connect the BinaryFace server to clients who want to test our tracker. Like a live video call, when the client sends its video input, our server receives the video and resends the reprocessed video after tracking facial landmarks. This way, the client receives the BinaryFace tracking result on top of its video stream.
Usually, there is a relaying server in the middle to connect the two ends. Skype’s video call service is also powered by this relaying server method. The downside is that the connection takes more time and the relaying server can become overloaded, since all communication has to pass through it. This leads to slow processing times and additional server management costs.
We integrated WebRTC to implement P2P direct connection to solve this problem. The P2P direct connection allows two ends to be connected directly once the initial negotiation is done with the signaling server.
Unfortunately, due to browser differences in video codec support, it was hard for us to send and receive videos. While the server we adapted from the open source project (WebRTC) supports VP8 and VP9, Safari only supports H.264. We could have added H.264 support, but it would have taken too much time. Another fundamental limitation was the significant delay that remained even after the integration, as the BinaryFace server is comparatively far from some users.
So we decided to shift direction and implement our DL/ML model directly in the browser using WebAssembly. WebAssembly allowed us to run BinaryFace as native code on the client end, resulting in support for all browsers and faster speeds.
Structuration: Dependency & Infrastructure Codification
Let us move on to the web client structuration. To build a web client as we intended, you need to understand the dependencies of each infrastructure necessary and structure them without causing any interference.
While some might think our web client tools simply mark selected images, they are not as simple as they seem. The web client has a complex structure, with infrastructure components such as a repository and certificates for security. These components are connected to each other through certain rules and relationships, which we call dependencies. A deep understanding of AWS architecture was required, as we utilize its beautifully pre-built infrastructure in developing the tools.
Seunghwan also used Terraform to codify infrastructure for better post management. Terraform allows users to define a data-center infrastructure as code so that structuring is partially automized and simplified.
Example: If we want to build a clothing-wearing process, wearing a top, pants, socks, and sneakers are the infrastructure components. Wearing shoes cannot precede wearing socks, and if you wear jeans, you cannot wear shorts as well. Those rules and relationships are each component’s dependencies. Terraform’s role is like a detector (or your parents) who would infer that you should wear socks before sneakers. Terraform automatically codifies the structure of the clothing-wearing process for you.
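Under the hood, Terraform builds a dependency graph of resources and applies them in an order that respects it. A hypothetical sketch of that idea in plain Python, using the clothing example above (this is Kahn’s topological sort, not Terraform’s actual implementation):

```python
def apply_order(deps):
    """Return an order that respects dependencies (Kahn's algorithm).

    deps maps each resource to the resources it depends on --
    the same idea Terraform uses when it builds its resource graph.
    """
    remaining = {r: set(d) for r, d in deps.items()}
    order = []
    while remaining:
        # Resources with no unmet dependencies can be "applied" now.
        ready = sorted(r for r, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        for r in ready:
            order.append(r)
            del remaining[r]
        # Applying them satisfies the dependencies of what is left.
        for d in remaining.values():
            d.difference_update(ready)
    return order

# The clothing example: sneakers depend on socks and pants.
order = apply_order({
    "top": [], "pants": [], "socks": [],
    "sneakers": ["socks", "pants"],
})
print(order)
```

Whatever order comes out, socks and pants always precede sneakers — which is the guarantee an infrastructure tool needs before provisioning dependent resources.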
Overall Review
Now our web client for deep learning pipeline management is ready! We covered how we manage the ‘data testing’ stage of the pipeline, which tools we built, and how we optimized and structured the web client. Thanks for sharing your intern project and experience, Seunghwan!
Seunghwan’s Comment:
The project reinforced my understanding of web client structuration/optimization and infrastructure maintenance. Interning at BinaryVR gave me the chance to learn new technologies that one cannot gain from studying alone. It feels great to know that my code is directly running as part of the product development process. Topics mentioned above, such as WebRTC or Terraform, are hard to learn unless you have unique opportunities. I could not thank BinaryVR more for their considerate guidance and support.
Explore open positions: https://angel.co/binaryvr/jobs
Send your resume for the internship: [email protected]
Learn working at BinaryVR: ‘What Made Engineers from Tech Giants Gather at a Small AI Startup?’
We are BinaryVR; aiming for seamless interaction between AI and people’s daily lives in the computer vision field. We develop the world’s top quality facial motion capture solutions, HyprFace and BinaryFace, keeping our core value in constant evolution.
(by BinaryVR)

A Guide to Learning Artificial Intelligence in Two Weeks
(2018-02-20)
This article is the second in the series on learning in two week sprints.
In this sprint, I delved into the vast field of artificial intelligence. The bullet points and links I list are resources that I found to be the most useful while completing my sprint. It is not meant to be an exhaustive list as that would be nearly impossible, but I hope that it can serve as a guide for you to potentially complete a similar sprint on artificial intelligence.
Part I focuses on the detailed process of the sprint while Part II focuses on my specific learnings.
The first overview article for why I completed a series of sprints can be read here.
Part I
Why artificial intelligence?
Over the past few years, I’ve heard numerous organizations say that they are applying “machine learning” (ML) and artificial intelligence (AI) to their products to stay innovative. But what does this actually entail? One of the reasons why I delved more into this topic was to move past those buzzwords. I wanted to focus on the deeper technical details while gaining a general understanding of the bigger picture. Additionally, I had the goal of assessing if this was an area that I could make a bigger commitment towards.
My framework for learning
My outline consisted of questions to answer, individuals and organizations to research or talk to, articles to read, videos to watch, events to attend, and a technical project to complete. I learn best when I’m working on a project, but the readings, videos, and events are important too since they give me foundational knowledge and perspective to begin the project.
First, I began writing out some of the questions to answer:
Bigger picture
What are the categories and top applications of AI?
How is AI currently being utilized in different organizations?
Where is the field headed in the next 5 years?
What does the day to day look like for someone who is a machine learning researcher?
What sort of background is most commonly found in this field? How easy is it to break into without a PhD?
How does the field of computational neuroscience relate?
Technical
What is a neural net composed of?
What are the different types of neural networks? What are their limitations?
What are the mathematical concepts to know in order to learn machine learning and deep learning?
How do I train a dataset? What kinds of datasets do I need?
What machine learning open source projects can I contribute to?
Individuals in artificial intelligence
Next, I outlined the individuals in the field to keep track of during the sprint. These were individuals who have either contributed significantly to the field or written high-quality content. I also had conversations with various friends in AI/ML roles.
Hod Lipson (Professor at Columbia and director of Creative Machines Lab)
Fei Fei Li (Stanford AI professor)
Andrew Ng (Stanford AI professor, Coursera co-founder, former Baidu Chief Scientist)
Yann LeCun (founding father of convolutional nets, director of Facebook AI Research)
Ian Goodfellow (Research Scientist at Google Brain)
Andrej Karpathy (Formerly at OpenAI, now Director of AI at Tesla)
Delip Rao (founder of Joostware, referenced in a16z)
Shivon Zilis (VC at Bloomberg Beta, written a lot on MI)
Sam DeBrule (voice of Machine Learnings)
Nathan Benaich (investor and AI technologist writer)
Siraj Raval (developer evangelist and Youtuber who makes AI education videos)
Key organizations
There are numerous organizations dedicated to artificial intelligence research, applications, ethics, etc. I’ve highlighted a few below:
Google Brain (deep learning AI research team at Google)
OpenAI (nonprofit dedicated to researching and creating safe general AI)
Machine Intelligence Research Institute (MIRI)
DeepMind (AI research company acquired by Google)
Future of Humanity Institute
Asimov Institute (check out their blog)
NYAI (NYC speaker series meet-up)
Potential projects to complete
There were a variety of potential projects to pursue in this two week timeframe. Each of the three below touches upon a different aspect of artificial intelligence.
Twitter sentiment analysis using Python utilizing natural language processing
Image classification AI utilizing computer vision
Handwriting recognition AI with Tensorflow library
I decided to work on implementing a handwriting recognition AI using Tensorflow and the MNIST handwritten digits dataset. MNIST has been called the “hello world” of deep learning and contains 60,000 images of handwritten digits to train a model.
With regards to image classification, I was also able to use the Clarifai API to easily train a neural net to recognize various objects, such as a plane, wing, or shirt. According to Crunchbase, Clarifai is “a startup that provides advanced image recognition systems for customers to detect near-duplicates and visual searches.” Their value add is that they abstract away much of the complexity that comes with building your own image classification models.
MNIST dataset sample from Savio Rajan
Articles, videos, and textbook
I’ve curated a list of articles and a textbook to parse through. There is an endless amount of information online, but I found the ones below to be the most insightful.
On the technical side, I spent about a week reading the first five chapters of a friend’s textbook, Fundamentals of Deep Learning. I found some parts difficult to grasp at first, and it wasn’t until I started reinforcing it with online videos and other instructional content that it began making more sense. Specifically, Siraj Raval’s Youtube videos were highly engaging to watch and easy to understand. It helps to have the content explained from different perspectives.
I had read the book Superintelligence before I started this sprint, but I also recommend it to get a sense of the potential dangers of artificial intelligence development.
Bigger picture articles
a16z AI Playbook (an overall survey of the field of AI)
Why Work in AI by 80,000 Hours (how to become involved in shaping the future of AI)
What is Artificial Intelligence? (an overview post by Sam DeBrule)
The Current State of Machine Intelligence (graphic and write up of the startup space by Shivon Zilis)
The AI Revolution (a Wait But Why series on what the development of AI will mean for humanity’s future)
Superintelligence by Nick Bostrom
Technical articles and textbook
Artificial Neural Network Introduction (about training a neural net to recognize handwriting)
Deep Learning 101
A Beginner’s Guide to Convolutional Neural Networks (great write up by a computer science student about a more advanced type of neural net)
The Unreasonable Effectiveness of RNN
Fundamentals of Deep Learning (a textbook written by my friend Nikhil; it’s a beginner’s book catered towards those who have some background in linear algebra, matrix multiplication, partial derivatives, vectors, and Python)
Videos
How We’re Teaching Computers to Understand Pictures (a TED talk by Fei Fei Li)
Nuts and Bolts of Applying Deep Learning (a longer lecture by Andrew Ng)
The Promise of AI (for a less technical overview of the field’s history and future)
In-person classes and workshops
In addition to the firehose of online information, there are also in person classes and workshops to learn the technical details of machine learning. They usually require some coding background. Although the classes are expensive, I can see them being useful because you are surrounded by other engineers learning the same material and a teacher to guide you through the process in real time. I attended a free two hour workshop in San Francisco but haven’t attended any of the daylong ones. I also haven’t seen any classes yet outside of San Francisco.
Lukas Biewald’s technical introduction classes on machine learning in San Francisco. You can find the materials he uses for his classes here.
Andrew Ng announced a two day Bay Area Deep Learning School in 2016, but the website seems to have been deprecated. You can still watch the videos here.
If you are looking for online courses instead, below are a few options for different budgets. If you’re looking for mini courses that you can go through in a week or two, then I recommend Zenva’s courses.
Guide to Siraj Raval’s Youtube videos. His Youtube channel can be found here.
Deep learning nanodegree by Siraj Raval and Udacity
Artificial intelligence nanodegree by Udacity
Zenva’s Python and AI mini courses
Part II
The purpose of this section is to provide a shortened overview of what I learned in the two weeks after following the framework I outlined.
Terminology
To start off, it’d be helpful to know what some of the basic terminology means.
Artificial intelligence is a broad sub-field of computer science. According to the Oxford dictionary, it is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
There are two main categories of AI. Artificial narrow intelligence (ANI), or “weak AI,” is an AI system that can only specialize in one area, such as beating the best Go player in the world. This technology exists today in our smartphones, apps, etc.
Artificial general intelligence (AGI), or “strong AI,” is building systems that think and reason exactly like humans can. This doesn’t exist yet.
There is a third category of artificial intelligence, artificial superintelligence (ASI), that is described by Nick Bostrom in his book Superintelligence. This supersedes artificial general intelligence in that an ASI system, according to Bostrom, “is an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.”
There are several subdomains of research within weak AI:
Machine learning
Computer vision
Natural language processing
Machine learning is an application of AI, and is used to implement aspects of computer vision, speech recognition, and natural language processing. It is based on learning from example with a set of data rather than giving a machine a set of rules to follow.
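To make the “examples, not rules” distinction concrete, here is a hypothetical, minimal learning-from-data classifier: a one-nearest-neighbor model in plain Python. Nothing about the categories is hard-coded; the labels come entirely from the training examples.

```python
def nearest_neighbor(train, point):
    """Classify a point by the label of its closest training example."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

# Toy data: (height_cm, weight_kg) pairs labeled "cat" or "dog".
train = [((25, 4), "cat"), ((30, 5), "cat"),
         ((55, 20), "dog"), ((60, 25), "dog")]

print(nearest_neighbor(train, (28, 5)))   # near the cat examples
print(nearest_neighbor(train, (58, 22)))  # near the dog examples
```

Swap in different training pairs and the same code classifies something else entirely — the behavior comes from the data, which is exactly what a hand-written rule set cannot do.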
Deep learning is an extension of machine learning that takes advantage of today’s greater computational power and larger datasets to create complex neural networks.
Computer vision has the goal of mimicking human visual capabilities, processing and identifying images and producing appropriate output.
Natural language processing (NLP) is the area of computers understanding, interpreting, and manipulating human language.
Startups and trends
There are numerous startups applying machine learning and deep learning algorithms to their products. I’ve highlighted a few below.
There are self-driving trucks and cars, but what about other transportation systems? To protect security and SWAT teams, Shield.ai is building a drone that will enter an unknown building to map it out first so that the SWAT team can minimize danger to themselves. Zipline is a company that uses drone technologies to deliver blood safely in Rwanda.
In the world of computer vision, image classification is becoming increasingly accurate with the advent of more complex neural networks. Competitions such as the prestigious ImageNet have teams compete to see which can produce the highest accuracy. Companies such as Pinterest have a feature that lets you use your own camera to create input from, say, your house. Pinterest will then classify the images in real time and suggest similar products. In agriculture, Blue River Technologies uses cameras mounted on tractors to take pictures of lettuce to determine the precise amount of fertilizer to apply to each lettuce head.
Robotics companies such as Knightscope and Cobalt Robotics are creating security guard robots to monitor buildings. Orchard Supply Hardware, based in San Jose, uses a robot called Oshbot to guide customers to the item that they are seeking in the store.
Reading about the technologies built to predict the future excited me the most. In healthcare, companies such as Freenome are working on diagnosing cancer more inexpensively and quickly by reading DNA that floats freely in your bloodstream. This is much quicker than doing a tissue biopsy and sending it to a lab for analysis. In the media space, Buzzfeed has built a tool that will predict whether or not a video that’s performing well in one country will also gain traction in another. UnifyID aims to eliminate the need for users to type in a password, instead using how you swipe, type, etc. to recognize you automatically for authentication. You can watch a video of their demo here.
When it comes to natural language processing, artificial intelligence has made some exciting progress as well. In trial law, Everlaw is working to more efficiently pore over large numbers of documents during the discovery process by automatically categorizing documents for attorneys to review. The Google Inbox team’s smart responses feature now powers 10% of all mobile email responses.
Photo of the Oshbot store robot from Fellow Robots
TensorFlow
There are several well-known libraries that engineers use for machine learning: TensorFlow by Google, Keras, Caffe, scikit-learn, Theano, and Torch. I decided to try TensorFlow because of the large number of resources I found on it and its popularity with engineers. Additionally, I was able to find tutorials on how to implement the handwriting recognition tool using TensorFlow.
TensorFlow was released in late 2015 as an open source deep learning project to great fanfare. The name TensorFlow comes from how it performs computations, which are done on data flow graphs. The nodes of the graphs are mathematical computations, while the edges are the data, represented by multidimensional data arrays, or “tensors.”
With the help of a Zenva TensorFlow tutorial and some debugging, I was able to train a model to recognize handwritten digits. The graph below shows the increase in accuracy over time as the neural network model is trained on more data.
Changes in accuracy of the TensorFlow neural network model
Theoretical technical learnings
AI has several similarities to neuroscience, as parts of it are inspired by the biological brain. For example, the fundamental unit of a model, the perceptron, is inspired by a biological neuron and translated into a mathematical expression. Perceptrons make up a neural network, and neural network models are trained to produce an output; they exist to teach a machine how to learn from examples.
A neural network consists of several layers; its depth is the number of layers, n. A basic network has an input layer and an output layer, but deep learning models can have many hidden layers in between.
A perceptron takes in several inputs, denoted as x, as well as weights, w, and is modeled as the sum of all (x * w) + b, where b is known as the bias. The bias is a quantity that is tweaked and added to the model so that it produces the proper output.
An activation function g is applied to this weighted sum to produce an output. The most commonly used activation function is ReLU, but there are also tanh, sigmoid, and others (I didn’t delve into the details of their composition). The output is then passed on to another perceptron that is one layer deeper.
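The perceptron computation described above can be sketched in a few lines of NumPy. This is a toy illustration, not code from the article; the inputs, weights, and bias values here are made up:

```python
import numpy as np

def perceptron(x, w, b):
    """Weighted sum of inputs plus bias, passed through a ReLU activation."""
    z = np.dot(x, w) + b          # the sum of all (x * w) + b
    return np.maximum(0.0, z)     # ReLU: max(0, z)

x = np.array([1.0, 2.0])          # inputs
w = np.array([0.5, -0.25])        # weights
b = 0.1                           # bias

output = perceptron(x, w, b)      # 1*0.5 + 2*(-0.25) + 0.1 = 0.1
```

In a network, `output` would then be fed as one of the inputs to a perceptron in the next layer.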
We minimize error and maximize accuracy by training the neural network, using a mathematical technique known as gradient descent. We measure the error with a cost function, and we reduce it by minimizing that function. The idea behind gradient descent is to use partial derivatives to attribute changes in the cost function to changes in a particular weight or bias. We can then update those weights and biases accordingly to get closer to the desired output.
Backpropagation of errors is how we compute exactly how much each weight and bias contributed to the error; it works because of the chain rule of partial derivatives. We take the error that we find at the output and propagate it backwards through the neural network. It might be easier to see this in the visual notes.
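The mechanics of gradient descent show up even in a toy example invented here (not from the article): minimizing the squared-error cost C(w) = (w·x − y)² for a single weight, where the partial derivative dC/dw tells us which way to nudge w:

```python
# Minimise C(w) = (w*x - y)^2 for a single weight w by gradient descent.
x, y = 2.0, 6.0          # one input and its target, so the ideal weight is 3.0
w = 0.0                  # initial weight
lr = 0.05                # learning rate

for _ in range(100):
    error = w * x - y    # forward pass: prediction minus target
    grad = 2 * error * x # dC/dw via the chain rule
    w -= lr * grad       # step the weight against the gradient

# w converges towards y / x = 3.0
```

A full network repeats this for every weight and bias, with backpropagation supplying each partial derivative.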
NYAI meetup
During the sprint, I also attended a New York Artificial Intelligence (NYAI) meet-up where I listened to Hod Lipson of Columbia University speak about his work on soft robots. He directs the Creative Machines Lab, which is “interested in robots that create and are creative. [They] explore novel autonomous systems that can design and make other machines automatically.” His TED talk on self-aware machines is one of the most viewed TED videos on AI.
He gave a technical talk about his latest projects, centred on the question of whether machines can discover scientific laws automatically from data. It’s a lab whose work I’ll continue to follow.
Next steps
After two weeks, I gained a better grasp on some concepts in AI, but since it is a vast field, I also felt like I had barely scratched the surface. Overall, it was thoroughly enjoyable to deep dive into the details of artificial intelligence and build a mini project. I’ve learned that artificial intelligence can be a powerful tool, and I would like to continue to apply these learnings in various capacities, whether it’s through open source contributions, meet ups, or a more formal role.
Python Programming Introduction for Machine Learning
This lecture is an optional introduction to Python Programming for Machine Learning following up from A Machine Learning Preface, where we look into the code more closely. It is meant as a coding introduction before the coding section of Machine Learning 1.
If you would like to follow along with the code, the code is posted here: Code for ML Python Programming Introduction.
To set up python on your computer, read my tutorial for setting up anaconda on windows and install the necessary libraries: Python Installation
Code Analysis
Python is a wonderful programming language that can do almost anything a user can imagine. But a lot of functionality we need to make our code simpler comes in other packages that we need to import to use.
Importing Packages
For this project we needed four key packages. We import
Numpy for mathematical processing.
Pandas for Excel/SQL-like organization
sklearn for off-the-shelf ML
matplotlib for drawing graphs.
When we say import package_name, we are making functions within that library available for the rest of our script.
When we add “as X” at the end, we are saying “Import this library, NumPy, but let’s call it np for the rest of the script to make the code prettier”
When we say “from sklearn.linear_model”, we are saying, “within sklearn, which is a big package, look in the linear_model module”.
When we say “from X import Y”, we are saying look inside X, but only make Y available from that library.
We now can say things like np.mean() to use NumPy to calculate the mean of a vector. When we do this, we are using the mean function within the NumPy library.
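Putting the four imports together, the top of such a script plausibly looks like the sketch below. The aliases follow the conventions described above; the exact lines in the linked code may differ:

```python
import numpy as np                       # mathematical processing
import pandas as pd                      # Excel/SQL-like data organization
from sklearn.linear_model import Ridge   # off-the-shelf ML: only Ridge is pulled in
import matplotlib.pyplot as plt          # drawing graphs

# The alias lets us write np.mean() instead of numpy.mean():
avg = np.mean([1, 2, 3])                 # 2.0
```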
Defining Variables
Now that we have packages imported, we start by generating our data.
When we say NUM_SAMPLES = 200, we are creating an object called NUM_SAMPLES, and setting its value to 200.
We start by defining x1-x5 using the NumPy rand function. We create each variable using numpy.random.rand(N), which creates N random numbers from 0 to 1. We use NUM_SAMPLES for N, to create the same number of values for each variable.
Note that NUM_SAMPLES is in all caps, which is a code cleanliness preference by Python users to represent GLOBAL CONSTANTS. This is different than x1-x5, which are all calculated variables.
Math should look much like you would expect, for instance
x2 = x1 + rand(N)/10
means “create N random numbers, divide them by 10, and add them to x1. You now have N new random numbers that are similar to x1. Call this x2”
The most complicated line above is
df = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3, 'x4': x4, 'x5': x5, 'y': y})
Reading left to right, we use pandas to create a DataFrame. A DataFrame is the core pandas object, and works similarly to an Excel table. The () after DataFrame is to call a function, and the insides are the inputs used to create the DataFrame. Inside, we see {}, with pairs of “‘name’: variable” inside, separated by commas. Here, we are mapping names to variables to create our DataFrame (like creating an Excel table with column names and data).
The last line, df.head(), takes our DataFrame which we called df, and calls the head function, hence “.head”. The () at the end just says “call this with no specifics”.
Head inspects the top values of a DataFrame. The top values of our DataFrame are below:
The above should look similar to a SQL or Excel table. It is a 2D table with rows representing days or data points, and columns being features and the target variable Y. Our dataset consists of 200 days of these input features and the corresponding daily sales.
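The data-generation steps described above can be reproduced end to end in a short, self-contained sketch. The coefficients used to build y here are invented for illustration; the linked code defines its own:

```python
import numpy as np
import pandas as pd

NUM_SAMPLES = 200                           # GLOBAL CONSTANT, in caps by convention

np.random.seed(0)                           # fix the randomness so runs are repeatable
x1 = np.random.rand(NUM_SAMPLES)            # 200 random numbers between 0 and 1
x2 = x1 + np.random.rand(NUM_SAMPLES) / 10  # similar to x1, plus a little noise
x3 = np.random.rand(NUM_SAMPLES)
x4 = np.random.rand(NUM_SAMPLES)
x5 = np.random.rand(NUM_SAMPLES)
y = 3 * x1 + 2 * x3 + np.random.rand(NUM_SAMPLES)  # made-up target for the sketch

# Map column names to variables, like building an Excel table:
df = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3, 'x4': x4, 'x5': x5, 'y': y})
print(df.head())                            # inspect the top five rows
```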
Visualization
Next we visualize our dataset using matplotlib’s pyplot visualization library.
We start by calling the figure function to create a figure to draw on. We also specify figsize=(10, 5) within the function call to make a big, rectangular image.
Now we want to plot two scatterplots side by side, so we Google how to plot two images in one figure using matplotlib. We are told to graph each plot with a plt.subplot.
We call plt.subplot() and pass in 1, 2, 1. The first number represents how many rows of images we have, the second number is how many columns of images we have, and the third number represents which of the plots we are choosing.
So 1, 2, 1 represents 1 row, drawing the left of two images. 1,2,2 represents drawing the right image.
When we draw the images, we use scatter() and pass in x and y. Scatter accepts two lists of values, and for every pair it puts a point at that (x, y) location.
For instance, scatter([1, 3, 2], [3, 0, 0]) would draw three data points, at (1, 3), (3, 0), and (2, 0). If x = [1, 3, 2] and y = [3, 0, 0], then calling scatter(x, y) would draw the same thing.
Finally, we wrote plt.show() to ask matplotlib to show our current graph.
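A minimal version of the side-by-side scatterplots might look like the following. The data is made up to stand in for the DataFrame columns, and the Agg backend is forced so the sketch also runs without a display:

```python
import matplotlib
matplotlib.use('Agg')              # draw off-screen; drop this line to see a window
import matplotlib.pyplot as plt

x1 = [0.1, 0.5, 0.9]               # made-up stand-ins for the DataFrame columns
x2 = [0.15, 0.55, 0.95]
y = [1.0, 2.5, 4.0]

fig = plt.figure(figsize=(10, 5))  # one big, rectangular figure
plt.subplot(1, 2, 1)               # 1 row of images, 2 columns, choose the left plot
plt.scatter(x1, y)                 # one point per (x1[i], y[i]) pair
plt.subplot(1, 2, 2)               # same grid, choose the right plot
plt.scatter(x2, y)
plt.savefig('scatter.png')         # plt.show() would display it interactively instead
n_axes = len(fig.axes)             # the figure now holds two subplots
```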
Preparing Input Data
Let’s prepare input and output data for regression:
Looking above, we can grab columns of a DataFrame by using DataFrame[columns to select].
If we just want ‘y’ for our target variable, we use df['y']. If we want all of our x’s as our input variables, we use a list of column names. The list of columns is created with ['x1', 'x2', 'x3', 'x4', 'x5'], and then we index into df using that list. Hence, df[['x1', 'x2', 'x3', 'x4', 'x5']]
Code for Regression
To run regression with sklearn, we need to create a regression model. To create a regression model using Ridge, which is a type of linear regression, we call Ridge(). In this example we pass in a small number, 1e-3 or .001, which sets the model’s regularization parameter to 1e-3. You can ignore this detail for now.
We then take that model, and call .fit(X,Y) to tell the model to learn from the input and output data.
Wow, that was easy. One line of code to perform regression.
Afterwards, we run one more line to ask what it learned. We do that by asking the regression model for its .coef_ attribute, which, as we saw on the sklearn Ridge website, is where the learned parameters live. We then call .round(2) to round the values to two decimal places.
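The whole regression step, from selecting columns to reading the learned coefficients, fits in a few lines. The data below is synthetic so the sketch is self-contained, and the exact coefficient values depend on that made-up relationship:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

np.random.seed(0)
df = pd.DataFrame(np.random.rand(200, 5), columns=['x1', 'x2', 'x3', 'x4', 'x5'])
df['y'] = 3 * df['x1'] + 2 * df['x3']      # invented "true" relationship

X = df[['x1', 'x2', 'x3', 'x4', 'x5']]     # input columns, selected by a list of names
y = df['y']                                # target column

regression = Ridge(1e-3)                   # small regularization strength
regression.fit(X, y)                       # one line to learn from the data

coefficients = regression.coef_.round(2)   # learned weights, rounded to 2 decimals
```

With so little regularization, the rounded coefficients land close to the 3 and 2 baked into the synthetic target, with near-zero weights on the irrelevant columns.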
We learned a little bit of Python code! NumPy for math, pandas for data organization, sklearn for machine learning, and matplotlib for visualizations are some of the most crucial libraries for performing machine learning.
Please let me know if you’d like me to expand this post to teach the code for custom regression using TensorFlow.
Using this in a practical setting: https://medium.com/@leetandata/machine-learning-engineering-1-custom-loss-function-for-house-sales-estimation-95eec6b12457
Experts explain why machine learning will make some jobs easier
BOT or NOT? This special series explores the evolving relationship between humans and machines, examining the ways that robots, artificial intelligence and automation are impacting our work and lives.
https://www.geekwire.com/2017/experts-explain-machine-learning-will-make-jobs-easier-kill-off-others/
#machinelearning #advancedanalytics
Why Big Data is pushing us towards Machine learning
As an Engineering Manager with more than 20 years of experience, I have seen many changes that completely disrupted different areas: “Web 2.0”, “Cloud computing”, “Mobile-first”, “Big Data”, etc. The new kid on the block is “Machine learning”, and it is definitely at its peak. One example is perfectly described by CB Insights, showing how startups use machine learning terms to raise their valuations: “If you want to drive home that you’re all about that AI, use terms like machine learning, neural networks, image recognition, deep learning, and NLP. Then sit back and watch the funding roll in.”
In 2016 Gartner’s famous Hype Cycle of Emerging Technologies added the term “machine learning”, since it is being used everywhere.
But let’s put things in perspective and try to understand why is this happening now.
Huge amounts of data are being streamed from phones, computers, TVs and IoT devices. Every day the equivalent of 530,000,000 million digital songs or 250,000 Libraries of Congress worth of data are being created globally.
The data gathered grows exponentially, creating a paradigm shift on how we store and process large data sets. This affects the data infrastructure and long-term devops strategic decisions we need to make in order to support the increasing demand for scalability and concurrency.
But… it is not the quantity of data that is revolutionary; it is that we can actually do something with the data.
For most organizations leveraging massive data sets is a problem since not everyone knows how to deal with terabytes of data. It takes highly specialized teams to analyze and process insights. When the data is huge, it is not humanly possible to understand which variables affect each other.
This is where Machine Learning fits in and why it will ultimately change the way we handle data. Regardless of the amount, researchers need to ask the right questions, design a test, and use the data to determine whether their hypothesis is right.
Let’s tidy things up
Here is an interesting Venn diagram on machine learning and statistical modeling in data science (Reference: SAS institute)
Artificial Intelligence — a term coined in 1956, refers to a line of research that seeks to recreate the characteristics possessed by human intelligence.
Data science — uses automated methods to analyze massive amounts of data to extract valuable knowledge and insight.
Machine Learning — is a recent development that started in the 1990s when the availability and pricing of computers enabled data scientists to stop building finished models and train computers instead.
Building data science teams
Most organizations create heterogeneous teams that include three primary data-focused roles: data scientists, data analysts and data engineers.
Here are the key differentiators between the data-focused roles:
Data Engineer — basically these are developers that know how to handle big data. In general they would have majors in: Computer science and engineering.
Data Analyst — they translate numbers into plain English. A data analyst’s job is to take data and use it to help companies make better business decisions. In general they would have majors in: Business, economics, statistics.
Data Scientist — they combine statistics, mathematics and programming. They have the ability to find patterns by cleansing, preparing, and aligning data. In general they would have majors in: Math, applied statistics, operations research, computer science, physics, aerospace engineering.
Wrapping up
Let me play with Dan Ariely’s quote on big data as others already did:
“Machine Learning is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
So, can everybody use machine learning to increase ROI? Probably not, and I don’t think that in the near future all our jobs will be automated by bots. But we definitely need to be able to massively process data at scale using machine learning in order to provide solutions, whether to increase ROI in our organizations or to help solve urgent global problems like cancer research, natural resource scarcity and more.
Make sure you hire the right team and ask the right questions, hopefully your organization’s data + data science will provide you with the answers you are looking for.
Let me leave you with one last quote:
“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.” — Bill Gates
So, start learning and researching so you can find the right answer for you and your data.
Artificial Intelligence and the future of energy
Using AI in renewable energy production prediction, energy grid balancing and next generation understanding of energy consumers.
About the author. Dr. Tadas Jucikas is a co-founder and CEO at Genus AI — a next generation artificial intelligence platform which enables businesses to interact with their customers in an emotionally intelligent way. Tadas advises WePower, a revolutionary blockchain-based green energy trading platform, on how to harness the power of data science and artificial intelligence. Tadas has completed a PhD in Computational Neuroscience at the University of Cambridge, UK.
Artificial Intelligence will change the future of energy. I am often asked how Artificial Intelligence (AI) can be used to help interpret the past, optimise the present and predict the future. Having helped build data science and machine learning solutions in both private and public sectors I’m always pleasantly surprised by the multitude of applications and opportunities that AI technology can offer.
There are only three limitations to building successful AI systems: computing power, availability of data, and imagination. More often than not, the last of these is the hardest to realise.
Even though the successes are just starting to emerge AI has proven that it can revolutionise energy as well. The sector already depends on optimisation and predictions: energy production, energy grid balancing and consumption habits.
AI offers a unique solution to these challenges and due to its capacity to evolve and learn it will undoubtedly become a critical component of the energy industry.
AI plays a critical role in the WePower platform as well. The WePower platform is the next generation utility company and has in-built capabilities to use AI technology for renewable energy forecasting, grid balancing and in-depth consumer understanding.
Early Traction. Emerging technology team lead Dan Walker at the British Petroleum’s (BP) Technology Group says [1]:
“AI is enabling the fourth industrial revolution, and it has the potential to help deliver the next level of performance.”
AI provides an inspiring area for talented individuals as a career path. Solving energy problems directly relates to improving living conditions for generations to come. Bill Gates, founder of Microsoft, wrote an online essay to college students graduating worldwide in 2017 where he stated [2]:
“If I were starting out today… I would consider three fields. One is artificial intelligence. We have only begun to tap into all the ways it will make people’s lives more productive and creative. The second is energy, because making it clean, affordable, and reliable will be essential for fighting poverty and climate change.”
The last one that he mentioned was biological sciences.
It has prompted students across the world to enrol in these subjects and study as well as discuss challenges. Harvard University’s Franklin Wolfe, a graduate student in the Earth and Planetary Science programme writes a great overview of the challenges of the grid and how a new and different ‘smart grid’ could be enabled by AI [3].
However, has there already been a success story where AI helped the energy industry? The answer to this question requires a bit of research. A lot of pilot programmes are not publicised and are still in their early stages. However, a mounting body of evidence indicates a bright future for AI in the energy sector.
A great example of early traction is Google’s DeepMind technology, which became famous for teaching itself to play the ancient game Go through a technique called reinforcement learning, becoming the world’s number one player [4]. The team behind the technology announced that its machine learning algorithms could cut electricity usage at Google’s data centres by 15% [5]. The predictions centred around anticipating a higher load on the data centres’ cooling systems and controlling equipment more efficiently. This decreased the energy used for cooling by 40% and translated into saving hundreds of millions of dollars for Google over several years [6].
This announcement prompted discussions on how such an approach could be used elsewhere. One of such applications is with the National Grid in the United Kingdom.
“We are in the very early stages of looking at the potential of working with DeepMind and exploring what opportunities they could offer for us,” said National Grid. “We are always excited to look at how the latest advances in technology can bring improvements in our performance, ensure we are making the best use of renewable energy, and help save money for bill payers.” [5].
The focus of this partnership would be to use AI technology to balance energy supplies to the National Grid. The expected savings are substantial — DeepMind aims to cut the national energy bill by up to 10% [7]. The abundance of historical data supports such advanced predictive capabilities.
Several other big players are already active in this space.
Originally, IBM had everyone guessing why it had acquired The Weather Company. Many joked that they misinterpreted what it means to compute on the “cloud”.
However, it was revealed that IBM planned to launch a new product called Deep Thunder which will offer precision weather predictions at a 0.2 mile to 1.2 mile resolution. The focus of the Watson product will be on how minor changes of weather can affect consumer behaviour and help businesses to react more effectively [8].
IBM has also worked extensively in solar energy prediction. Even as early as 2013 IBM’s research division partnered with the Department of Energy in the United States on leveraging machine learning for clean power. IBM has over 200 partners that use their solar and wind forecasting technology [9]. The technology is built by combining dozens of forecasting models and then integrating a multitude of data sources about the weather, the environment, atmospheric conditions and how solar plants and the power grids operate. The predictions range in availability anywhere between every fifteen minutes up to thirty days in advance. IBM’s product manager Hendrik Hamann claims that the self-learning weather model and renewable forecasting technology is 50% more accurate than the next best solar forecasting model [9].
Early signs of AI changing the industry sector are also present in the technology community, not only large companies. A data scientist Evan Baker posting on a Medium shares exciting results:
“Using a random forest model, I was able to predict expected annual savings to around $15.00, a 75% increase in accuracy over predictions generated by the National Renewable Energy Laboratory (NREL)” [10]
Evan outlines his approach, supported by a homemade pipeline that uses publicly available data and open source tools. His machine learning models give highly accurate predictions, mapping out the expected return on energy generated by a prospective solar panel. He made the predictions available on his website: http://solarcalculator.xyz.
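A hedged sketch of the kind of model described might look like the following. The features, data, and savings relationship here are entirely invented for illustration; Baker's actual pipeline and feature set live on his site:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Invented stand-ins for the kind of features such a model might use:
# roof area (m^2), annual sunshine hours, local electricity price ($/kWh).
X = np.column_stack([
    rng.uniform(10, 100, 500),
    rng.uniform(1500, 3500, 500),
    rng.uniform(0.08, 0.30, 500),
])
# Invented savings relationship, just to give the model something to fit:
y = 0.005 * X[:, 0] * X[:, 1] * X[:, 2] + rng.normal(0, 20, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])                        # train on the first 400 homes
errors = np.abs(model.predict(X[400:]) - y[400:])  # held-out absolute errors
mean_error = errors.mean()                         # average dollar error per home
```

The held-out mean error is the same kind of accuracy figure Baker reports against the NREL baseline.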
AI powered tools such as this make information regarding the ability to switch to solar more accessible and readily available for everyday use. It has the potential to bring down soft costs associated with installation and may accelerate the transition to renewable energy and micro-production in the future.
AI technology background. With the rise of cloud computing and the ever-decreasing cost of computation, this technology will become more and more widely available, now and in the future. One of the most process-heavy steps in AI systems is model training and validation. Being able to pay per minute or even per second for the use of computing power removes the need for large upfront investment and data centre maintenance costs. With Google Cloud, IBM Bluemix and Amazon’s cloud, the power to perform highly complex computations is readily available to everyone today [11].
The systems architecture for machine learning, which underpins artificial intelligence, is also seamlessly provided by cloud solutions. Some of the computations require highly parallel processing capabilities and are best performed by hardware specifically built for such purposes. Cloud solutions that offer native architecture for such computations are already available. A good example is Google’s TensorFlow platform, which enables next generation deep learning [12].
One of the unexpected beneficiaries of this trend are graphic processing unit (GPU) manufacturers who tried for decades to create highly paralleled processing capabilities for the gaming industry. Nvidia is a great example — their stock price has risen dramatically because such processing units are ideal for artificial intelligence systems. Some even say that Nvidia’s lead in enabling AI computing is nearly impossible to replicate [13].
Data is the next big resource that has grown in capacity. It has been estimated that in the year 2017 alone we will produce more data than in the last 5,000 years of humanity combined [14]. Data for artificial intelligence is like air, food, and water for us humans. The first fields to generate significant amounts of data were physics and biology: the LHC at CERN in Switzerland and DNA sequencing, followed by the explosion of the digitisation-of-life phenomenon.
One other area where data is continually generated at a large scale is in the energy sector.
Predicting renewable energy. The production of energy from renewable sources is growing rapidly. With the advancement of technology development harnessing energy from wind, sun, hydro, amongst others it is becoming more popular and economically accessible. Negative effects on the environment from energy sources such as natural gas, oil and coal have further accelerated this shift.
The United Kingdom’s move to renewable energy reached a new milestone in 2016. Grant Wilson, teaching and research fellow at the University of Sheffield, details the shift in a widely circulated piece on The Conversation [15]. According to Wilson, in 2016 just 9.3% of British (not UK, as Northern Ireland is calculated separately) electricity was generated from coal, down from more than 40% in 2012. That would be the lowest percentage of coal that has ever been provided in the system’s 100-year history and the lowest absolute quantity burnt since the start of World War II [15].
The new record capacity of electricity comes from renewable energy, mainly from wind and solar power.
However, the biggest challenge with renewables is that energy production is intermittent. The production depends on weather conditions, such as the wind blowing or sun shining. Unlike conventional power, this means such sources cannot necessarily meet surges in demand.
Valentin Robu, a lecturer in Smart Grids at the university of Heriot-Watt discusses how AI can provide a solution and ‘future proof the grid’ [16].
There has been a lot of research studying accuracy and prediction capabilities. A paper from the office of Energy Efficiency & Renewable Energy discusses a multi-scale, multi-model machine learning solar forecasting technology [17].
Solar is not the only forecasting problem tackled by researchers. A talk by Andy Clifton from the NREL National Wind Technology Center in the US discusses machine learning applications in modelling the power output of wind turbines and shows promising results [18]. The methods described in the talk are regression-tree-based algorithms, and the results present a compelling case for further exploration of wind energy forecasting, both for an individual rotor and subsequently for the whole plant.
A paper by Gul M Khan from the University of Engineering and Technology in Peshawar describes neural network approaches to predicting the power generation of wind-based power plants. The results show predictions from a single hour up to a year ahead, with a mean absolute percentage error as low as 1.049% for single-day hourly prediction [19].
In 2015, IBM was able to show an improvement of 30% for solar forecasting while working with the U.S. Department of Energy SunShot Initiative [20]. The self-learning weather model and renewable forecasting technology integrated large data-sets of historical data and real-time measurement from local weather stations, sensor networks, satellites, and sky image cameras. The platform is exploring how to address forecasting challenges in wind and hydro-power plants.
Nils Treiber and his colleagues from the University of Oldenburg discuss how machine learning can be used to predict wind power [21]. Their study focuses on predictions for individual turbines and then for entire wind parks, on horizons from a matter of seconds up to hours. They compare their results to a persistence model and show an increase in accuracy of over 24% [21].
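A persistence model — the baseline such studies typically compare against — simply forecasts that the next value will equal the current one. A minimal sketch with made-up turbine readings (not the study’s data):

```python
import numpy as np

def persistence_forecast(series, horizon=1):
    """Predict series[t + horizon] as series[t]: the forecast for
    series[horizon:] is simply the series shifted by `horizon`."""
    series = np.asarray(series, dtype=float)
    return series[:-horizon]

# Hypothetical turbine power readings (kW)
power = np.array([500.0, 520.0, 480.0, 510.0, 505.0])
pred = persistence_forecast(power)            # predictions for power[1:]
mae = float(np.abs(power[1:] - pred).mean())  # baseline error to beat
```

Any learned model is worthwhile only if its error beats this trivially cheap baseline.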
A survey by Kasun S. Perera and colleagues from the Technical University of Dresden and the Masdar Institute of Science and Technology in Abu Dhabi discusses the need for accurate forecasting and its implications for balancing the grid [22]. The survey also discusses the creation of a ‘smart grid’, touching on the need to identify renewable energy plant locations, integration points and sizes. Machine learning has become a tool for strategic planning and policy making in renewable energy.
The need for a smart grid. The first grid was created by Thomas Edison in 1882 as the Pearl Street Station plant in lower Manhattan, which powered 59 customers. The customer base has since grown to hundreds of millions of users, but the grid’s overall structure and approach have not fundamentally changed. It consists of a vast network of transmission lines, distribution centres and power plants.
In order to adapt to the intermittent nature of renewable energy generation, there has been a worldwide effort to modernise the grid. The U.S. Department of Energy has made supporting the ‘smart grid’ a national policy goal, which includes a ‘fully automated power delivery network that monitors and controls every consumer and node, ensuring a two-way flow of electricity and information’ [23]. It has been reported that in the last seven years the department has invested over $4.5 billion in smart grid infrastructure. Part of the investment is focused on installing 15 million smart meters and monitoring energy usage per device in order to alert utilities to local blackouts. This programme is estimated to limit the rise in peak electricity loads on the grid to only 1%. This is especially important given that total U.S. energy demand is expected to increase by 25% by 2050 [23].
Michael Bironneau, the technical director at the UK’s Open Energi, a company that gives energy users the power to participate in the energy market, has been exploring the future of the grid:
“In the UK alone, we estimate there is 6 gigawatts of demand-side flexibility which can be shifted during the evening peak without affecting end users. Put into context, this is equivalent to roughly 10% of peak winter demand and larger than the expected output of the planned Hinkley Point C — the UK’s first new nuclear power station to be built in generations. Artificial Intelligence can help us to unlock this demand-side flexibility and build an electricity system fit for the future; one which cuts consumer bills, integrates renewable energy efficiently, and secures our energy supplies for generations to come.” [24].
Quirin Schiermeier, in his Nature column last year, explores the efforts in Germany to modernise the grid [25]. He quotes Malte Siefert, a physicist at the Fraunhofer Institute for Wind Energy and Energy System Technology in Kassel, Germany, and a leader of the project called EWeLiNE:
“To operate the grid more efficiently and keep fossil reserves at a minimum, operators need to have a better idea of how much wind and solar power to expect at any given time”.
The German government has promised that by 2050 at least 80% of the country’s electricity will come from renewables. The challenge is that on calm and cloudy days grid operators still need conventional power stations to meet the expected demand. The reverse applies on windy and sunny days, when grid operators must swiftly order coal- and gas-fired power stations to reduce their output.
Schiermeier notes that such requests, called re-dispatches, cost German customers more than €500 million (US$553 million) a year because grid operators must compensate utility firms for adjustments to their inputs [25]. In addition, this generates needless carbon dioxide emissions if the operators produce extra power that is not used. Renate Hagedorn, a meteorologist with the German weather service in Offenbach, comments that:
“It is quite a concern that renewable energy here is expanding so fast without a proper database for an accurate power forecast” [25].
In the EWeLiNE project, machine learning models are used to predict power generation over the next 48 hours. The team checks these power forecasts against what actually materialises, and machine learning then improves the predictive models. This closes the learning loop and makes the approach highly effective [25].
The National Center for Atmospheric Research (NCAR) in Boulder, Colorado, USA has embarked on a project similar to the one in Germany. It was started in 2009 and is now operational in eight US states [25].
Drake Bartlett, a renewable-energy analyst with Xcel Energy, the utility firm with the highest total wind capacity in the United States says that:
“The number of forecasting errors has dropped since 2009, saving customers some US$60 million and reducing annual CO2 emissions from fossil-reserve power generation by more than a quarter of a million tonnes per year” [25].
Understanding consumers. The third pillar for creating a stable, scalable, and smart energy system is understanding energy consumers.
Energy, like any other product, has seen a rise in differentiation in terms of brands, usage plans and sources of energy.
Customers are increasingly vocal about their preferences regarding the environmental impact of energy producers. Consumer opinion and choices have a tremendous impact on the energy sector.
Consumers also produce a continuous stream of data through the power grid itself. There has been a significant push by utility providers to install smart meters, which can send information back to the provider on an hourly or even more frequent basis. This helps not only to predict the network load, but also to model consumption habits. A ‘night owl’ who tends to work at night will have a completely different energy usage pattern from someone who enjoys a 6am run.
There are over 7.4 billion people on the planet and no two individuals are the same. However, grid managers, utility companies and governments still see people through a simplistic lens of geography- and demographics-based segments.
Understanding the consumer’s habits, values, motivations, and personality helps to further bolster the balancing and effectiveness of a smart grid. It also allows for creating policies more effectively and enables an understanding of the human motivations associated with renewable energy adoption.
Genus Artificial Intelligence focuses on analysing first- and third-party data about consumers to help organisations understand people. This understanding allows them to engage with audiences in an emotionally intelligent way at scale. Genus AI strives to make human-level emotional intelligence a reality and to deploy it in ways that inform decisions and have a positive impact on the real world.
Appreciating individual level consumer differences in the context of energy platforms will unlock the next phase of optimisation and forecasting.
Genus AI is proud to work with WePower, an energy trading market powered by blockchain technology, to achieve this goal [27]. With ongoing use, the system will become more and more accurate, enabling the further evolution, success and growth of the WePower platform.
The use of AI in the smart energy network will enable the long-awaited transition to fully decarbonised energy production and consumption.
References:
[1] http://www.bp.com/en/global/corporate/bp-magazine/innovations/artificial-intelligence-in-the-energy-industry.html
[2] https://www.gatesnotes.com/About-Bill-Gates/Dear-Class-of-2017
[3] http://sitn.hms.harvard.edu/flash/2017/artificial-intelligence-will-revolutionize-energy-industry/
[4] https://www.dailydot.com/debug/google-deepmind-ai-go/
[5] https://www.ft.com/content/27c8aea0-06a9-11e7-97d1-5e720a26771b
[6] http://www.cityam.com/260742/googles-deepmind-talks-national-grid-apply-ai-energy-use
[7] http://www.businessinsider.com/google-deepmind-wants-to-cut-ten-percent-off-entire-uk-energy-bill-using-artificial-intelligence-2017-3
[8] https://www.marketwatch.com/story/ibm-finally-reveals-why-it-bought-the-weather-company-2016-06-15
[9] http://fortune.com/2016/09/14/data-machine-learning-solar/
[10] https://medium.com/@evanbaker/predicting-solar-energy-production-with-machine-learning-19fcab295e58
[11] https://www.youtube.com/watch?v=bljciQEsXBU
[12] https://www.tensorflow.org/
[13] http://www.barrons.com/articles/nvidias-lead-in-a-i-nearly-impossible-to-replicate-says-evercore-1505495255
[14] https://appdevelopermagazine.com/4773/2016/12/23/more-data-will-be-created-in-2017-than-the-previous-5,000-years-of-humanity-/
[15] https://theconversation.com/the-year-coal-collapsed-2016-was-a-turning-point-for-britains-electricity-70877
[16] https://theconversation.com/why-artificial-intelligence-could-be-key-to-future-proofing-the-grid-71775
[17] https://energy.gov/eere/sunshot/watt-sun-multi-scale-multi-model-machine-learning-solar-forecasting-technology
[18] https://www.nrel.gov/docs/fy13osti/58314.pdf
[19] http://ieeexplore.ieee.org/document/6889771/
[20] https://energy.gov/eere/success-stories/articles/eere-success-story-solar-forecasting-gets-boost-watson-accuracy
[21] https://link.springer.com/chapter/10.1007/978-3-319-31858-5_2
[22] https://link.springer.com/chapter/10.1007/978-3-319-13290-7_7
[23] https://energy.gov/oe/activities/technology-development/grid-modernization-and-smart-grid
[24] http://www.openenergi.com/artificial-intelligence-future-energy/
[25] https://www.nature.com/polopoly_fs/1.20251!/menu/main/topColumns/topLeftColumn/pdf/535212a.pdf
[26] www.genus.ai
[27] https://wepower.network
| Artificial Intelligence and the future of energy | 373 | artificial-intelligence-and-the-future-of-energy-105ac6053de4 | 2018-06-19 | 2018-06-19 02:58:20 | https://medium.com/s/story/artificial-intelligence-and-the-future-of-energy-105ac6053de4 | false | 3,418 | Blockchain-based green energy financing and trading platform | null | WePowerNetwork | null | WePower | wepower | GREEN ENERGY,BLOCKCHAIN,SMART CONTRACTS,DECENTRALIZATION,RENEWABLE ENERGY | WePowerN | Machine Learning | machine-learning | Machine Learning | 51,320 | Tadas Jucikas | Co-founder, CEO at Genus AI where we build artificial intelligence which enables businesses to interact with their customers in an emotionally intelligent way. | a419f87360b0 | TadasJucikas | 57 | 14 | 20,181,104 | null | null | null | null | null | null |
|
0 | jupyter notebook Traffic_Sign_Classifier.ipynb
docker pull udacity/carnd-term1-starter-kit # for cpu
sudo docker build -t tf_py3_cv2 -f Dockerfile.gpu .
docker run -it --rm -p 8888:8888 -v `pwd`:/src udacity/carnd-term1-starter-kit
sudo nvidia-docker run -v `pwd`:/notebooks -it --rm -p 8888:8888 tf_py3_cv2
gray_image_normalized = (gray_image - 128)/ 128
| 6 | null | 2018-01-26 | 2018-01-26 07:14:26 | 2018-01-26 | 2018-01-26 07:19:04 | 33 | false | en | 2018-01-26 | 2018-01-26 21:37:54 | 16 | 105db8c55304 | 10.25283 | 2 | 0 | 0 | Build a Traffic Sign Recognition Project | 5 | Traffic Sign Recognition via Neural Network
Build a Traffic Sign Recognition Project
The goals / steps of this project are the following:
Load the data set (see below for links to the project data set)
Explore, summarize and visualize the data set
Design, train and test a model architecture
Use the model to make predictions on new images
Analyze the softmax probabilities of the new images
Summarize the results with a written report
Usage of my code
Pull a docker container with tensorflow gpu and python3
CPU: use udacity-carnd.
For GPU use, an AWS EC2 g2.2xlarge GPU instance can be used. Install docker-ce and nvidia-docker2 on the instance. In addition, I need to build a docker image locally to support cv2, etc.
Launch this workspace
CPU only
GPU
Rubric Points
Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
Writeup / README
1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. You can use this template as a guide for writing the report. The submission includes the project code.
Here is a link to my project code
Data Set Summary & Exploration
1. Provide a basic summary of the data set. In the code, the analysis should be done using python, numpy and/or pandas methods rather than hardcoding results manually.
I used the pandas library to calculate summary statistics of the traffic signs data set:
The size of the training set is 34799.
The size of the validation set is 4410.
The size of the test set is 12630.
The shape of a traffic sign image is (32, 32, 3).
The number of unique classes/labels in the data set is 43.
2. Include an exploratory visualization of the dataset.
Here is an exploratory visualization of the data set. It is a bar chart showing the data distribution, where the x-axis shows the label indices and the y-axis the number of samples for each category/label.
Train Set Distribution
Test Set Distribution
Validation Set Distribution
Design and Test a Model Architecture
1. Describe how you preprocessed the image data. What techniques were chosen and why did you choose these techniques? Consider including images showing the output of each preprocessing technique. Pre-processing refers to techniques such as converting to grayscale, normalization, etc. (OPTIONAL: As described in the “Stand Out Suggestions” part of the rubric, if you generated additional data for training, describe why you decided to generate additional data, how you generated the data, and provide example images of the additional data. Then describe the characteristics of the augmented training set like number of images in the set, number of images for each class, etc.)
As a first step, I decided to convert the images to grayscale because, based on my experiments, colour does not help sign recognition. Here are corresponding examples of traffic sign images before and after grayscaling.
The examples of RGB images are,
The corresponding grayscale images are shown as follows,
As a last step, I normalized the image data because zero-mean data provides a better-conditioned distribution for numerical optimization during training. The equation is gray_image_normalized = (gray_image - 128) / 128.
The normalized images are shown as follows
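This preprocessing step — zero-centring grayscale pixels with (x − 128) / 128 — can be sketched in code; the array below is a synthetic stand-in for a real 32x32 grayscale sign image:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a 32x32 grayscale traffic-sign image (values 0..255)
gray_image = rng.integers(0, 256, size=(32, 32)).astype(np.float32)

# Zero-centre and scale to roughly [-1, 1), as described in the write-up
gray_image_normalized = (gray_image - 128) / 128

lo = float(gray_image_normalized.min())
hi = float(gray_image_normalized.max())
```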
I decided to generate additional data because I found that one misprediction among the 5 new signs from the web was due to a scaling issue. Due to the time constraints of this project, I used only scaling and cropping to generate extra samples to help recognize this kind of image. Here are examples of the original images and the augmented images:
The augmented train set has the sample distribution shown below.
Compare it to the original distribution of the train set.
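The scale-and-crop augmentation can be sketched as follows. This uses plain NumPy nearest-neighbour indexing purely for illustration; the actual notebook may use OpenCV instead:

```python
import numpy as np

def scale_and_crop(image, factor=1.2, out_size=32):
    """Upscale by `factor` using nearest-neighbour indexing, then
    centre-crop back to out_size x out_size (a zoomed-in version)."""
    h, w = image.shape[:2]
    new_h, new_w = int(h * factor), int(w * factor)
    rows = np.arange(new_h) * h // new_h   # nearest-neighbour row map
    cols = np.arange(new_w) * w // new_w   # nearest-neighbour column map
    scaled = image[rows][:, cols]
    top = (new_h - out_size) // 2
    left = (new_w - out_size) // 2
    return scaled[top:top + out_size, left:left + out_size]

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)  # synthetic image
augmented = scale_and_crop(img)
```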
2. Describe what your final model architecture looks like including model type, layers, layer sizes, connectivity, etc.) Consider including a diagram and/or table describing the final model.
My final model consisted of the following layers:
3. Describe how you trained your model. The discussion can include the type of optimizer, the batch size, number of epochs and any hyperparameters such as learning rate.
To train the model, I used the Adam optimizer discussed in the lecture.
The batch size is 128.
The number of epochs is 51.
The learning rate is 0.0008.
The keep probability of dropout is 50.0%.
4. Describe the approach taken for finding a solution and getting the validation set accuracy to be at least 0.93. Include in the discussion the results on the training, validation and test sets and where in the code these were calculated.
My final model results were:
training set accuracy of 99.9%.
validation set accuracy of 95.7%.
test set accuracy of 94.0%.
new signs accuracy of 100.00% (80% without augmented train sets)
An iterative approach was chosen:
What was the first architecture that was tried and why was it chosen?
Answer: I started with LeNet since the lecture mentioned it has pretty good performance in this kind of task.
What were some problems with the initial architecture?
Answer: The accuracy is not high enough, only around 89% for the test set.
How was the architecture adjusted and why was it adjusted?
Answer:
I increased the filter depth to capture more pattern information from the inputs.
I added dropout for the fully-connected layers to avoid the overfitting.
Which parameters were tuned? How were they adjusted and why?
Answer: I tuned the dropout keep probability. I started with 0.7, then decreased it to 0.5. See the Configuration and Performance Table below.
Configuration and Performance Table
The training performance figure is attached.
Test a Model on New Images
1. Choose five German traffic signs found on the web and provide them in the report. For each image, discuss what quality or qualities might be difficult to classify.
Here are five German traffic signs that I found on the web:
For example, the following image received a wrong prediction from the neural network trained without the augmented data set. It is actually a priority road sign that has been scaled and cropped, which fooled the network.
After adding generated data based on scaled versions of the original training images, the prediction is correct.
2. Discuss the model’s predictions on these new traffic signs and compare the results to predicting on the test set. At a minimum, discuss what the predictions were, the accuracy on these new predictions, and compare the accuracy to the accuracy on the test set
Here are the results of the prediction with augmented data sets:
The accuracy on these new signs is better than the accuracy on the test set.
Note that without augmented data, the model was able to correctly guess 4 of the 5 traffic signs, which gives an accuracy of 80%.
This is the reason why I added scaled images as augmented data samples to help the deep neural network to get trained.
3. Describe how certain the model is when predicting on each of the five new images by looking at the softmax probabilities for each prediction. Provide the top 5 softmax probabilities for each image along with the sign type of each probability.
The following figures show the top 5 softmax probabilities.
The confidence of each prediction is pretty high.
Summary
Based on the test set.
The Precision of the model is 91%.
The Recall Score of the model is 94%.
The confusion matrix of the model
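These summary metrics can be recomputed from a confusion matrix. The sketch below uses placeholder labels for a 3-class toy problem (the real task has 43 classes), not the actual test-set predictions:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Placeholder labels for a 3-class toy problem
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2]

cm = confusion_matrix(y_true, y_pred, n_classes=3)
recall_per_class = cm.diagonal() / cm.sum(axis=1)     # per true class
precision_per_class = cm.diagonal() / cm.sum(axis=0)  # per predicted class
```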
Further Steps:
Add more diverse samples to the train set. Due to the time constraint, I only added the data with different scaling factors. Actually, we can rotate the images, use different blur versions, and so on.
Balance the sample data distributions in the train set.
Train the architecture from the IJCNN’11 paper mentioned.
Visualization of the neural network’s state.
Following paragraphs are from the comments based on my project submission by Udacity Reviewers. Thank you for the code review!
Meets Specifications
Brilliant Learner,
Thank you for the prompt resubmission. By carefully going through the project, it shows a lot of effort, diligence and above all, understanding of the underlying concepts. Well done!!.
You have successfully passed all the rubrics of this project excellently on the very first submission. This is very uncommon and I congratulate you for this. Do not forget, there is more to SDC and you are just getting started. Keep up the hard work and determination. It was my pleasure reviewing this wonderfully implemented project
Files Submitted
The project submission includes all required files.
Well done . The project submission has successfully included the necessary files which include:
Ipython notebook with code
HTML output of the code and a
A markdown write-up report
Dataset Exploration
The submission includes a basic summary of the data set.
Good job performing basic data summary !!! You rightly used python libraries such as pandas and numpy and some methods to perform operations such as extracting the shape of the images, the number of examples in the training set, the number of examples in the testing set and the number of unique classes in the dataset.
The submission includes an exploratory visualization on the dataset.
Excellent job done in the visualization of the data set. The notebook shows images with class titles and also bar charts to further explore the data set.
Design and Test a Model Architecture
The submission describes the preprocessing techniques used and why these techniques were chosen.
The report explicitly describes the preprocessing techniques used for example grayscale and data normalization. It goes further to explain why these techniques where chosen. Brilliant work done.
Suggestions.
In the report, you mentioned grayscale and normalization as techniques. However, I also invite you to read the following topics:
How can I convert an RGB image into grayscale in Python?
Thresholding of a grayscale Image in a range;
Image Processing with Python — RGB to Grayscale Conversion;
Normalizing images in OpenCV;
Normalization in Image processing;
sklearn.preprocessing.normalize.
The submission provides details of the characteristics and qualities of the architecture, including the type of model used, the number of layers, and the size of each layer. Visualizations emphasizing particular qualities of the architecture are encouraged.
Great work. The report shows details of the characteristics and qualities of the architecture used and states the number of convolution and dropout layers used.
Suggestions.
We see that your final model is based on the LeNet Architecture. More reading about this topic is provided here:
LeNet — Convolutional Neural Network in Python;
Convolutional Neural Networks LeNet;
Convolutional Neural Networks CNNs/ConvNets.
The submission describes how the model was trained by discussing what optimizer was used, batch size, number of epochs and values for hyperparameters.
Good work using the Adam Optimizer for this project. The following hyperparameters were used in the project with their corresponding values.
Suggestions.
Some documents and topics about Adam optimizer:
Tensorflow: Using Adam optimizer;
tf.train.AdamOptimizer ;
Adam: A Method for Stochastic Optimization.
The submission describes the approach to finding a solution. Accuracy on the validation set is 0.93 or greater.
Nice work achieving a maximum validation set accuracy of 0.95 as shown in the image below.
Suggestions
Please, check on Early stopping to avoid overfitting.
Test a Model on New Images
The submission includes five new German Traffic signs found on the web, and the images are visualized. Discussion is made as to particular qualities of the images or traffic signs in the images that are of interest, such as whether they would be difficult for the model to classify.
The submission includes five new German Traffic signs found on the web, and provide a good discussion on the quality of the images.
The submission documents the performance of the model when tested on the captured images. The performance on the new images is compared to the accuracy results of the test set.
The submission correctly documents the performance of the model, and the performance on the new images is compared to the accuracy results of the test set.
The top five softmax probabilities of the predictions on the captured images are outputted. The submission discusses how certain or uncertain the model is of its predictions.
The top five softmax probabilities of the predictions on the captured images are perfectly outputted. Impressive work!
| Traffic Sign Recognition via Neural Network | 2 | traffic-sign-recognition-via-neural-network-105db8c55304 | 2018-01-27 | 2018-01-27 03:56:08 | https://medium.com/s/story/traffic-sign-recognition-via-neural-network-105db8c55304 | false | 2,081 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Hao Zhuang | Hacked linear algebra and matrix algorithms, applied machine learning, design automation | UCSD CS PhD | linkedin.com/in/zhuangh | 9f155190ea1e | zhuangh | 72 | 509 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-08 | 2018-09-08 13:50:10 | 2018-09-08 | 2018-09-08 15:22:04 | 10 | false | en | 2018-09-08 | 2018-09-08 22:45:13 | 1 | 105e405a4f15 | 3.499057 | 0 | 0 | 0 | To train a regression model we need find the optimum θ, which would minimize our MSE(mean square error). | 5 | Comparison of algorithms for Linear regression.
To train a regression model we need to find the optimum θ, which minimizes our MSE (mean squared error).
Note: This blog is only to put together information from various resources.
This θ can be found in four ways.
Finding θ using the normal equation, which is essentially what the Scikit-learn library’s LinearRegression does (in practice it uses a least-squares solver rather than forming the matrix inverse explicitly).
Linear regression model prediction: ŷ = θ₀ + θ₁x₁ + … + θₙxₙ = θᵀx
Normal equation: θ̂ = (XᵀX)⁻¹ Xᵀ y
Although the computational complexity of this algorithm with respect to the number of instances is O(m), m being the number of instances, its complexity with respect to the number of features is O(n³). This means that if the number of features is doubled, the computation time increases by roughly a factor of 8. So this algorithm is best suited when the number of features is small.
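In code, the normal equation is a one-liner. The sketch below fits noiseless data generated from y = 4 + 3x, so the recovered parameters can be checked exactly:

```python
import numpy as np

# Noiseless data generated from y = 4 + 3x
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 4 + 3 * X[:, 0]

X_b = np.c_[np.ones((len(X), 1)), X]  # add x0 = 1 for the bias term
# Normal equation: theta = (X^T X)^(-1) X^T y
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y
```

On real data, np.linalg.pinv or a least-squares solver is numerically safer than an explicit inverse.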
2. Finding θ using gradient descent / batch gradient descent.
Cost function: MSE(θ) = (1/m) Σᵢ (θᵀx^(i) − y^(i))²
Partial derivatives of the cost function: ∂MSE(θ)/∂θⱼ = (2/m) Σᵢ (θᵀx^(i) − y^(i)) xⱼ^(i)
Gradient vector of the cost function: ∇θ MSE(θ) = (2/m) Xᵀ(Xθ − y)
Gradient descent step (eta is the learning rate): θ ← θ − η ∇θ MSE(θ)
Batch gradient descent/gradient descent, computes the gradient of the cost function w.r.t. to the parameters θ for the entire training dataset.
If the learning rate is too small, it may take a long time to reach the optimum solution, but if the learning rate is too high the algorithm diverges and moves further away from the optimum. Hence we need to use a good eta, which can be chosen using grid search or a cross-validation set.
Gradient Descent scales well with the number of features; training a Linear Regression model when there are hundreds of thousands of features is much faster using Gradient Descent than using the Normal Equation.
Please refer to this link for better understanding:http://ruder.io/optimizing-gradient-descent/index.html#gradientdescentvariants
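The batch gradient descent step translates directly into code. The learning rate, iteration count, and toy data below are illustrative choices, not canonical values:

```python
import numpy as np

# Toy data from y = 4 + 3x, with the bias column x0 = 1 prepended
X_b = np.c_[np.ones((4, 1)), np.array([[0.0], [1.0], [2.0], [3.0]])]
y = 4 + 3 * X_b[:, 1]
m = len(X_b)

eta = 0.1            # learning rate (illustrative)
theta = np.zeros(2)  # start from the origin
for _ in range(2000):
    # Gradient of the MSE cost over the whole training set
    gradients = (2 / m) * X_b.T @ (X_b @ theta - y)
    theta = theta - eta * gradients
```

Every iteration touches all m instances, which is exactly why this variant becomes slow on large training sets.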
3. Finding θ using stochastic gradient descent:
The main problem with batch gradient descent is the fact that it uses the whole training set to compute the gradients at every step, which makes it very slow when the training set is large. Stochastic gradient descent, on the other hand, just picks a random instance from the training set at every step and computes the gradients based only on that single instance. This makes the algorithm much faster, as it considers only a single instance at every iteration.
Hence, it is better to use stochastic gradient descent for huge training sets.
On the other hand, due to its stochastic (i.e., random) nature, this algorithm is much less regular than batch gradient descent: instead of gently decreasing until it reaches the minimum, the cost function will bounce up and down, decreasing only on average. Over time it will end up very close to the minimum, but once it gets there it will continue to bounce around, never settling down. So once the algorithm stops, the final parameter values are good, but not optimal. Hence we may use the mini-batch gradient descent method.
Please refer to this link for better understanding:http://ruder.io/optimizing-gradient-descent/index.html#gradientdescentvariants
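A minimal stochastic gradient descent sketch on the same toy data follows; the decaying learning-rate schedule and epoch count are illustrative assumptions, and the data is made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: y = 4 + 3x with a bias column
X_b = np.c_[np.ones((4, 1)), np.array([[0.0], [1.0], [2.0], [3.0]])]
y = 4 + 3 * X_b[:, 1]
m = len(X_b)

theta = np.zeros(2)
mse_before = float(np.mean((X_b @ theta - y) ** 2))

for epoch in range(200):
    for step in range(m):
        i = int(rng.integers(m))                      # pick one random instance
        xi, yi = X_b[i], y[i]
        gradient = 2 * xi * (xi @ theta - yi)         # gradient on that instance
        eta = 0.1 / (1 + 0.01 * (epoch * m + step))   # decaying learning rate
        theta = theta - eta * gradient

mse_after = float(np.mean((X_b @ theta - y) ** 2))
```

The decaying schedule tames the bouncing described above: early steps move fast, late steps settle.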
4. Finding θ using the mini-batch gradient descent method.
Instead of using a single instance or the entire dataset to update θ, we compute the gradients on small random sets of instances called mini-batches.
Please refer to this link for better understanding:http://ruder.io/optimizing-gradient-descent/index.html#gradientdescentvariants
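The mini-batch variant can be sketched by shuffling the toy data each epoch and stepping through it in batches; the batch size, learning rate, and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 4 + 3x with a bias column
X_b = np.c_[np.ones((4, 1)), np.array([[0.0], [1.0], [2.0], [3.0]])]
y = 4 + 3 * X_b[:, 1]
m, batch_size = len(X_b), 2

theta = np.zeros(2)
eta = 0.05  # illustrative fixed learning rate
for epoch in range(500):
    idx = rng.permutation(m)               # shuffle instances every epoch
    for start in range(0, m, batch_size):
        batch = idx[start:start + batch_size]
        X_batch, y_batch = X_b[batch], y[batch]
        gradients = (2 / batch_size) * X_batch.T @ (X_batch @ theta - y_batch)
        theta = theta - eta * gradients
```

With a batch size of 1 this reduces to stochastic gradient descent, and with batch_size = m to batch gradient descent.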
Summary:
The computational complexity of the normal equation is very high when the number of features is large.
Though,batch gradient descent gives us optimum solution,it iterates over the entire data set and hence takes a lot of time to reach the optimum value.
Though Stochastic gradient descent algorithm is much faster than batch gradient descent, it gives us answers close to the optimum value.
Mini-batch gradient descent has the advantages of both batch and stochastic gradient descent and gives answers that are much closer to the optimum value than those given by stochastic gradient descent.
References:
Book: Hands-On Machine Learning with Scikit-Learn and TensorFlow.
| Comparison of algorithms for Linear regression. | 0 | comparison-of-algorithms-for-linear-regression-105e405a4f15 | 2018-09-08 | 2018-09-08 22:45:13 | https://medium.com/s/story/comparison-of-algorithms-for-linear-regression-105e405a4f15 | false | 596 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Harshita Vemula | Grad student at UT Austin. | 7d5d88d735e8 | harshita.vemula | 0 | 18 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-08 | 2017-12-08 16:14:58 | 2017-12-08 | 2017-12-08 16:23:07 | 1 | false | en | 2017-12-08 | 2017-12-08 16:23:07 | 7 | 105ea198f998 | 7.339623 | 1 | 1 | 0 | Three ways to talk about our automated future with the people it displaces | 3 | No Luddites Left Behind
Three ways to talk about our automated future with the people it displaces
A fellow audience member stood up after a talk on the Socioeconomic Impact of AI in New York City.
“I work for the cable company from home. I make sales calls every day on commission,” he said. He appeared over 50, and as though he likely made his calls from a homeless shelter: unkempt, old clothes, a distinct odor about him.
“How is AI going to affect my job?” he asked.
The panelists hesitated, silence like eggshells as the man gazed up at them in earnest. Before he spoke, the talk had focused matter-of-factly on retraining, reskilling, growth mindsets, and constant learning. It starts with our education system, the panelists said. We need to grow curious enough to re-learn new skills every few years, they noted. Today’s jobs will not exist tomorrow, they added. Get used to this new reality, they insisted.
But how exactly do you talk about this to the man standing in front of you, a man with a job that may soon be automated out of existence?
We often tout the inevitability of innovation and preach the need to keep up, or else. But to prepare for a future that promises mass displacement and unemployment, we must speak about automation and AI with more care.
Rapid-fire jargon, fear-mongering headlines, and dismissiveness of those not already invested in tech will only divide us and create greater chasms between the haves and have-nots. By using plain English to explain AI, emphasizing optimism, and educating with inclusion for workers everywhere, we can create a future that ensures no modern-day Luddites are left behind.
Automation: Not always embraced with open arms
We’ve all heard the stats: 45 percent of activities people do at work can be automated, and 60 percent of all jobs could see 30 percent or more of their activities automated. 49 percent of time spent at work could be automated, too — the equivalent of 1.1 billion workers globally.
Millions of truck drivers will let go of their obsolete steering wheels. Hordes of data analysts will abandon error-riddled spreadsheets and their manual formulas. Business decisions will land in the laps of automated algorithms, not executives. And that salesman in the audience may soon hang up his phone for good.
Certainly all of this change will save businesses time, but it won’t save workers’ morale. Behind the buzzwords, forecasts, and statistics are the faces of everyday people who just want to keep their jobs. Even as many workers will strive to learn, understand, and prepare — much like the man at the AI panel did — many won’t have the option to remain employed. The consequences could be dire.
High-skilled, well-educated people will fare better in this future. After all, 13 percent of the U.S. population still doesn’t use the internet. But many Americans don’t think they need to adapt, with 80 percent saying their job definitely or probably will exist in its current form in 50 years. And while 46 percent believe AI will harm people by taking away jobs, just 23 percent believe it will have serious, negative implications.
So while talks of our automated future are framed as exciting learning opportunities to the educated knowledge worker — Take a MOOC! Learn to code! — the reality is that these initial changes will hurt most of the nation. Sure, relieving an executive of everyday burdens to focus on strategy is a plus. But when your skillset was proud mastery of those tedious tasks, automation doesn’t sound so freeing.
I think about my welder dad, who came home every night with fresh burns seared onto his arms. No matter that robot limbs can be sprayed with fire and never file for worker’s compensation, and will never know the pain of pink slips or fear of pay cuts. My dad wants to keep his job, scars and all. So, too, do most salesmen want to keep cold calling.
What, then, can we do to move society forward — to reap the many gains of an automated future, without the suffering at the hands of millions more?
Peter Drucker famously said that “efficiency is doing things right; effectiveness is doing the right things.” The pace of tech change pushes us toward efficiency, but doing the right thing in a world of AI and automation is not simply a matter of speed, but a matter of humanity. In a time of ideological extremes, distrust of authorities, and divisive rhetoric that pits progressive city-dwellers against the small-town working class, we need to educate everybody on the impact of technology — now, more than ever.
Three ways to talk about our automated future to those it will displace
1. Use plain English to explain the technology. As complex as concepts like machine learning and neural networks sound at first blush, they are easily grasped when explained with patience, empathy, and examples that help laypeople understand. I’ve noticed that even the most technical, MIT-trained PhDs slow their speech and offer helpful analogies to people who bravely tell them, “I didn’t understand what you meant.” But many people don’t have the chance to express their confusion, or will simply stop listening when they’re unable to understand. Lost opportunities to educate, connect, and employ more people in advanced technology can stem simply from poor word choices, and will only compound as automation proliferates.
We owe it to society and to our economic future to clearly explain automation and artificial intelligence to children, to the undereducated, and to the people whose jobs will be dismantled by technology first.
What does this look like? It’s as simple as using relatable illustrations to show how an algorithm differentiates between two objects or words. It might be that you only use technical jargon or buzzwords when accompanying them with real-life examples, e.g. “Big Data might include something like the records of every vote cast on a ballot in the U.S. from the last century.” Or that you use Warren Buffett’s trick of pretending he’s writing or speaking to his sisters, Doris and Bertie, whenever he’s tempted to use unclear verbiage.
At a recent AI Meetup I attended, I noticed the group leaders’ care in painstakingly defining many of the terms they used. It made a difference; at the next meeting, there were more newcomers, women, and people of color than ever. When we use words that most people understand, we invite more people in.
2. Speak with optimism, not simply fear. Imagine how empowering it’d be if we replaced the robots-rendering-us-prehistoric narrative with a narrative of hope: Your core human strengths are more valuable than ever. You have an important role to play. Your skills aren’t worthless. By pairing our automation fears with automation optimism, we can ensure we are speaking empathetically to those who may suffer from job displacement. For example, a bartender may someday lose her work to a robot, but her conversational flair and cocktail-recipe knowledge can aid even a Watson-run restaurant. A marketing analyst whose handpicked research could someday be collected by algorithms could, instead, become a guru at translating and tailoring insights for individual executives’ needs.
After all, scare tactics don’t work: A couple years ago, I was in the backseat of a shared Lyft when my fellow passenger asked the driver his thoughts on self-driving cars. “It’s not gonna work, man,” the cabbie replied. “We’re gonna f*ck with them on the road. Those cars won’t be safe, because we’ll cut them off every stop sign, every stoplight.”
His words reminded me of Ned Ludd’s followers, who, blindsided by new technology, protested progress by destroying the machines. This driver resisted the self-driving cars that would take his fares and render him an accessory. I wish I’d interjected to assure him that he had so many job options he might not even realize — a custom tour guide for self-driving buses, or radio host for new passenger listening opportunities. I speak up now: Those of us armed with tech savvy should advise those who are fearful, and take responsibility to help them find their silver-lining potential, no matter the dismal headlines we’ve all heard.
3. Be inclusive in addressing everyone’s automated future. Our optimism can’t be disconnected from reality, however. Glorifying future jobs like training algorithms and working alongside robot colleagues does not spell opportunity for the working class, but rather depicts a sci-fi movie scene they can’t begin to fathom on their factory floors. So while a classroom of corporate learners may nod their heads and anticipate next quarter’s coding course, most Americans balk at or dismiss this kind of talk.
By spreading the gospel of automation more widely and inclusively, we can help educate rather than ignore. Let’s give talks about AI in rural libraries, public schools, prisons, town halls, and local Manpowers, describing how everyday jobs might change and what that could look like. Let’s explain upskilling methods like MOOCs and programming languages by relating which workers can benefit from this knowledge, and how they can use it. Let’s not forget that many still don’t have smartphones, broadband internet, or university degrees.
Rather than glossing over low-wage jobs as simply collateral damage of innovation, let’s talk to those whose jobs may disappear. Let’s look them in the eyes and address their worldviews, circumstances, and feelings. It’s the right, and human, thing to do.
The audacity of automation
Back at the AI panel, the panelists carefully weighed in on the question from the gentleman in the audience.
The panel didn’t say the man wouldn’t have a job in 2 years, or that it was time for him to learn to code from Khan Academy. Or that his company would go bankrupt in a world of automation and AI. The blunt language from the panel’s earlier discussions softened into spoonfed advice that no longer wrote off a person’s stagnant skills as a thing of a past, but framed their knowledge as a hopeful base for the future. This time, they spoke with care.
They told the man that his job would likely get easier over time: better data would help him target customers who are more likely to buy, so he’d waste less time. He would see more impact in his work, they said, through reaching the people who need the product or service most. Eventually, they told him, he could automate his calls so that he could just handle the parts that require human intervention — checking in to see how the service was satisfying the customer, maybe. Altogether, his job could become more enjoyable, more financially rewarding, and more efficient.
The man took voracious notes, sat down, then stayed behind after to mingle with the other attendees. I noticed people reluctantly shook his soiled hands as he networked, collecting business cards with gumption. I wondered if he felt like his question was heard, if he felt he had a place in this automated future. If, at the end, he felt like he belonged.
I originally wrote this piece for the Peter Drucker essay challenge on human prosperity in a changing world. I scored 8th place. Thank you for reading about this important topic.
| No Luddites Left Behind | 1 | no-luddites-left-behind-105ea198f998 | 2018-03-14 | 2018-03-14 04:41:31 | https://medium.com/s/story/no-luddites-left-behind-105ea198f998 | false | 1,892 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Angela Pham | @angelapham | ca6eab26fc24 | angelapham | 186 | 242 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-08 | 2017-12-08 11:04:41 | 2017-12-08 | 2017-12-08 13:24:53 | 2 | false | en | 2017-12-08 | 2017-12-08 17:23:48 | 2 | 105fd9360f09 | 1.647484 | 0 | 0 | 0 | The rise of the machines | 5 |
The Reality of Google’s DeepMind Becoming a Chess Master in 4 Hours
The rise of the machines
2017 has been a year that will be remembered for many things. Currently, there is the prospect of a first-step breakthrough in the Brexit negotiations and, as I write, Bitcoin continues to divide opinion as it bubbles away at a staggering £12,186 per coin.
Bitcoin price: 8th December 2017
Though cryptocurrency and blockchain technology will be prevalent in the New Year, and no doubt our Brexit negotiations will go back and forth, the most significant piece of news for me was a match between Stockfish, the world’s best chess-playing computer program, and AlphaZero, the artificial intelligence software from the London-based Google sibling, DeepMind.
In isolation, this is quite an achievement. What is frightening and exciting is that Google’s platform self-learnt and deployed winning strategies — and won — within 4 hours!
Applying Machine Learning for the Greater Good
The difference between DeepMind’s AlphaZero and its competition is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over, reinforcing what it learns with every game.
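The self-play idea can be sketched in miniature. The toy below is purely illustrative — nothing here comes from DeepMind’s code, and the game is a tiny Nim variant rather than chess — but it shows an agent given only the rules of a game learning to play well by competing against itself and backing up wins and losses:

```python
import random

# Toy self-play sketch: an agent learns a tiny Nim variant -- take 1 or
# 2 stones from a pile; whoever takes the last stone wins -- purely by
# playing against itself. Illustrative only, not AlphaZero's method.

PILE = 10
values = {0: 0.0}  # win chance for the player to move; empty pile = loss

def choose(pile, epsilon=0.1):
    """Mostly pick the move that leaves the opponent the worst position."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < epsilon:
        return random.choice(moves)  # occasional exploration
    return min(moves, key=lambda m: values.get(pile - m, 0.5))

def play_game(alpha=0.2):
    """One self-play game; nudge each visited state toward the outcome."""
    pile, history = PILE, []
    while pile > 0:
        history.append(pile)
        pile -= choose(pile)
    # The player who moved last took the final stone and won, so walk
    # backwards through the game alternating win (1.0) / loss (0.0) targets.
    target = 1.0
    for state in reversed(history):
        v = values.get(state, 0.5)
        values[state] = v + alpha * (target - v)
        target = 1.0 - target

random.seed(0)
for _ in range(5000):
    play_game()

# Piles that are multiples of 3 are theoretically losing for the player
# to move; the learned values tend to drift toward that pattern.
print({s: round(v, 2) for s, v in sorted(values.items())})
```

The real system replaces this lookup table with a deep neural network and guided tree search, but the loop is the same shape: play yourself, learn from the outcome, play again.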
What does this mean for us in the future? Will we be what had been predicted by Sci-Fi writers all along and become the playthings of the technological robotic gods?
Well, the future may be a little better thanks to the likes of DeepMind and how they are applying their technology and process.
Health, Wellbeing and Beyond
There are quite a few innovations that were happening before the chess match with AlphaZero.
DeepMind Health, a dedicated division that brings DeepMind’s technology to patients, nurses and doctors, recently announced research work to help address breast cancer alongside the Cancer Research UK Centre at Imperial College London.
What is clear is that AI and machine learning are here and, though frightening, will create many new challenges and, more importantly, innovative new solutions to our ever-changing world.
| The Reality of Google’s DeepMind Becoming a Chess Master in 4 Hours | 0 | the-reality-of-googles-deepmind-becoming-a-chess-master-in-4-hours-105fd9360f09 | 2018-04-07 | 2018-04-07 17:09:15 | https://medium.com/s/story/the-reality-of-googles-deepmind-becoming-a-chess-master-in-4-hours-105fd9360f09 | false | 335 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Reggie James | Reggie James is a seasoned internet marketing strategist. Founder of agency Digital Clarity LDN and Exec. Director of DBMM Group, Inc. NYC (Stock DBMM:OTC) | 85fdcb94ba4d | reggiejames99 | 152 | 439 | 20,181,104 | null | null | null | null | null | null |