Northern Virginia Community College – Machine Learning & Data Analytics
Assignment (Part 1, Question 1): Describe the concepts of machine learning and data analytics and how applying them to cybersecurity will evolve the field. Write as much as you can, and include references.
Machine Learning and Data Analytics for Cybersecurity
Part 1: Overview of Machine Learning
Defining Machine Learning
Machine learning is a method of giving a computational device the ability to learn. It differs from traditional programming, in which the structure is logical, explicit, and conditional. Machine learning uses neural networks (among other techniques) and reinforcement learning to teach the computational device to distinguish correct from incorrect. One prerequisite of machine learning is data: to teach the device how to learn, we must have data to feed it.
For example, if we wanted to teach a computational device to identify pictures of dogs, we would need to supply both pictures that contain dogs and pictures that do not. A neural network forms the core of the system, and the network coupled with reinforcement-style feedback allows us to teach the computer. Essentially, the device attempts to identify the pictures with dogs; whenever it identifies a picture incorrectly, a human corrects it. This feedback loop is akin to a grade-school teacher providing feedback on a math test. Once the system is trained, it identifies pictures of dogs with a high success rate.
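To make the dog example concrete, here is a minimal sketch of that train-and-correct loop, assuming Python with scikit-learn and synthetic feature vectors standing in for real images (a production system would train a convolutional neural network on raw pixels):

```python
# Minimal sketch of the dog/not-dog training loop described above.
# Synthetic feature vectors stand in for real images; a production
# system would train a convolutional network on raw pixels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical data: 200 "images" reduced to 64 features each, labeled
# 1 ("dog") or 0 ("no dog") by a human (the teacher's feedback).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network learns from the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Once trained, the model labels unseen "pictures" with a high success rate.
print("held-out accuracy:", model.score(X_test, y_test))
```

The labeled examples play the role of the teacher's corrections: each label tells the model whether its current notion of "dog" was right or wrong.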
Machine Learning and Cybersecurity
Let’s look at machine learning in the context of cybersecurity.
Machine learning is being applied to many fields across industry, one of which is cybersecurity. The key to success is data, and in this field there is plenty of it: each device on the network can be configured to generate logs. Unfortunately, it is virtually impossible for a human to review all the logs generated within a network, and this is where machine learning comes in. For example, a trained machine learning model could monitor the overall network; when a breach occurs and the corresponding log entries appear, the model reacts by automatically responding to the breach. The system responds much faster than it would with a human in the loop.
Another example is automatically responding to sophisticated malware that traditional cybersecurity solutions fail to detect. Applying a heuristics-based approach, a machine learning model would detect communication behavior that is out of the norm for the network. At the very least it would flag the traffic for human review, and, if desired, it could automatically prohibit the communication.
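As a hedged illustration of this anomaly-flagging idea (a sketch under stated assumptions, not any specific product), the following trains an unsupervised detector, scikit-learn's IsolationForest, on historical traffic assumed to be mostly benign, then flags a flow whose features deviate from the norm. The feature names and values are hypothetical.

```python
# Illustrative sketch: flagging out-of-norm network behavior with an
# unsupervised anomaly detector (scikit-learn's IsolationForest).
# Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal traffic, one row per flow: [bytes sent, packets/sec, distinct ports]
normal = rng.normal(loc=[500.0, 20.0, 3.0], scale=[100.0, 5.0, 1.0],
                    size=(1000, 3))

# Train on historical traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal)

# A new flow contacting unusually many ports (e.g., a port scan).
suspicious = np.array([[480.0, 22.0, 40.0]])
if detector.predict(suspicious)[0] == -1:
    print("flag for human review, or automatically prohibit the communication")
```

The design choice here is unsupervised learning, since labeled examples of novel attacks rarely exist in advance; the same pattern extends to any features derived from device logs.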
Machine Learning Intro
We experience machine learning every day. Spam filters, optical character recognition, and
individualized content based on Internet surfing habits all use machine learning algorithms. Machine
learning aims at “enabling computers to learn new behavior based on empirical data” (Tantawi, 2016)
without being programmed by humans. Algorithms are designed to allow the computer to learn from
experience and use that to display new behavior. Machine learning is an important component of
artificial intelligence (Tantawi, 2016).
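Spam filtering is a canonical case of learning behavior from empirical data rather than explicit programming. As a hedged sketch (assuming scikit-learn and a toy hand-labeled corpus), a naive Bayes classifier can learn to separate spam from legitimate mail purely from examples:

```python
# Toy spam filter learned from labeled examples rather than
# hand-written rules. The corpus and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap meds online",   # spam
    "meeting moved to 3pm", "lunch tomorrow?",     # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# Bag-of-words features plus naive Bayes: the standard textbook pairing.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize meds"]))   # likely [1]
print(model.predict(["see you at lunch"]))  # likely [0]
```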
Cognitive computing uses machine learning algorithms to analyze large amounts of data and produce results. Internet search engines use it to combine the user's past search behavior with the current search request, applying patterns learned from personal data to return results that are relevant and useful to the user.
Another example is the recommendations a user receives on e-commerce sites, based on the past searches, keywords, and purchases not only of that user but also of other users who looked at the same product or used the same search terms. E-learning authors Mauro Coccoli, Paolo Maresca, and Lidia Stanganelli suggest that cognitive computing can enhance student performance and ease the instructor's job of managing the class and learning materials (2016).
Machine learning, artificial intelligence, and cognitive computing are key players in learning analytics
and predictive modeling. Machine learning has quickly become a key skill for developers to have as part
of their toolbox.
References
Coccoli, M., Maresca, P., & Stanganelli, L. (2016). Cognitive computing in education. Journal of e-Learning and Knowledge Society, 12(2). Retrieved from http://www.jelks.org/ojs/index.php/Je-LKS_EN/article/view/1185/987
Tantawi, R. (2016). Machine learning. Salem Press Encyclopedia.
How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms
How the machine ‘thinks’: Understanding opacity in machine learning algorithms by Jenna Burrell from
Big Data & Society is available under a Creative Commons Attribution-NonCommercial 3.0 Unported
license. © 2016, The Author.
How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of
classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends,
market segmentation and advertising, insurance or loan qualification, and credit scoring. These are just
some examples of mechanisms of classification that the personal and trace data we generate is subject
to every day in network-connected, advanced capitalist societies. These mechanisms of classification all
frequently rely on computational algorithms and, lately, on machine learning algorithms to do this work.
Opacity seems to be at the very heart of new concerns about “algorithms” among legal scholars and
social scientists. The algorithms in question operate on data. Using this data as input, they produce an
output; specifically, a classification (i.e., whether to give an applicant a loan, or whether to tag an e-mail
as spam). They are opaque in the sense that if one is a recipient of the output of the algorithm (the
classification decision), rarely does one have any concrete sense of how or why a classification has been
arrived at from inputs. Additionally, the inputs themselves may be entirely unknown or known only
partially. The question naturally arises, what are the reasons for this state of not knowing? Is it because
the algorithm is proprietary? Because it is complex or highly technical? Or are there, perhaps, other
reasons? By distinguishing forms of opacity that are often conflated in the emerging interdisciplinary
scholarship on this topic, I seek to highlight the varied implications of algorithmic classification for
longstanding matters of concern to sociologists, such as economic inequality and social mobility. Three
distinct forms of opacity include (1) opacity as intentional corporate or institutional self-protection and
concealment and, along with it, the possibility for knowing deception; (2) opacity stemming from the
current state of affairs where writing (and reading) code is a specialist skill; and (3) an opacity that stems
from the mismatch between mathematical optimization in high-dimensionality characteristic of machine
learning and the demands of human-scale reasoning and styles of semantic interpretation. This third
form of opacity (often conflated with the second form as part of the general sense that algorithms and
code are very technical and complex) is the focus of this article. By examining in depth this form of
opacity, I point out shortcomings in certain proposals for code or algorithm “audits” to evaluate for
discriminatory classification.
To examine this question of opacity, specifically toward the task of getting inside the algorithms
themselves, I cite existing literature in computer science, known industry practices (as they are publicly
presented), and do some testing and manipulation of code as a form of lightweight audit. Along the way,
I relate these forms of opacity to technical and non-technical solutions proposed to address the
impenetrability of machine learning classification. Each form suggests distinct solutions for preventing
harm.
So, What Is New?
The word algorithm has recently undergone a shift in public presentation, going from an obscure
technical term used almost exclusively among computer scientists to one attached to a polarized
discourse. The term appears increasingly in mainstream media outlets. For example, the professional
body National Nurses United produced a radio spot (heard on a local radio station by the author) that
starts with a voice that sarcastically declares, “Algorithms are simple mathematical formulas that
nobody understands,” and concludes with a nurse swooping in to rescue a distressed patient from a
disease diagnosis system which makes a series of comically wrong declarations about the patient’s
condition (see https://soundcloud.com/national-nurses-united/radio-ad-algorithms). The purpose of the
public service announcement is to champion professional care (by nurses), in this case against error-prone automation. By contrast, efforts at corporate branding of the term algorithm play up notions of
algorithmic objectivity over biased human decision making (Sandvig, 2015). In this way the connotations
of the term are actively being shaped as part of advertising culture and corporate self-presentation, as
well as challenged by a related counter-discourse tied to general concerns about automation, corporate
accountability, and media monopolies (i.e., Tufekci, 2014).
While these new media narratives may be novel, it has long been the case that large organizations
(including private sector firms and public institutions) have had internal procedures that were not fully
understood by those who were subject to them. These procedures could fairly be described as
“algorithms.” What should we then make of these new uses of the term and the field of critique and
analysis emerging along with it? Is this merely old wine in new bottles or are there genuinely new and
pressing issues related to patterns of algorithmic design as they are employed increasingly in real-world
applications?
In addition to the polarization of a public discourse about algorithms, much of what is new in this
domain is the more pervasive technologies and techniques of data collection; the vaster archives of
personal data including purchasing activities, link clicks, and geospatial movement, an outcome of more
universally adopted mobile devices, services, and applications; and the reality (in some parts of the
world) of constant connectivity. But this does not necessarily have much to do with the algorithms that
operate on the data. Often it is about what composes the data and new concerns about privacy and the
possibility (or troublingly, the impossibility) of opting out.
Other changes have to do with application areas and evolving proposals for a regulatory response. The
shift of algorithmic automation into new areas of what were previously white-collar work is reflected in
headlines like, “Will we need teachers or algorithms?” (Khosla, 2012) and into consequential processes
of classification that were previously human-determined, such as credit evaluations in an effort to
realize cost-savings (as so often fuels shifts toward automation) (Straka, 2000). In the domain of credit
and lending, Fourcade and Healy point to a shift from prior practices of exclusionary lending to a select
few to more generous credit offered to a broader spectrum of society–but offered to some on
unfavorable, even usurious terms. This shift is made possible by “the emergence and expansion of
methods of tracking and classifying consumer behavior” (Fourcade and Healy, 2013: 560). These
methods are (in part) implemented as algorithms in computers. Here the account seems to suggest an
expansion of the territory of work claimed by algorithmic routines: that they are taking on a broader
range of types of tasks at a scale that they were not previously.
In this emerging critique of “algorithms” carried out by scholars in law and in the social sciences, few
have considered in much depth their mathematical design. Many of these critics instead take a broad
socio-technical approach looking at “algorithms in the wild.” The algorithms in question are studied for
the way they are situated within a corporation, under the pressure of profit and shareholder value, and
as they are applied to real-world user populations (and the data these populations produce). Thus
something more than the algorithmic logic is being examined. Such analyses are often particular to an
implementation (such as Google’s search engine) with its specific user base and uniquely accumulated
history of problems and failures with resulting parameter setting and manual tweaking by programmers.
Such an approach may not reveal important broader patterns or risks to be found in classes of algorithms.
Investigating Opacity: A Method and Approach
In general, we cannot look at the code directly for many important algorithms of classification that are in
widespread use. This opacity (at one level) exists because of proprietary concerns. They are closed to
maintain competitive advantage and/or to keep a few steps ahead of adversaries. Adversaries could be
other companies in the market or malicious attackers (relevant in many network security applications).
However, it is possible to investigate the general computational designs that we know these algorithms
use by drawing from educational materials.
To do this I draw, in part, from classic illustrative examples of machine learning models, of the sort used
in undergraduate education. In this case I have specifically examined programming assignments for a
Coursera course in machine learning. These examples offer hugely simplified versions of computational
ideas scaled down to run on a student’s personal computer so that they return output almost
immediately. Such examples do not force a confrontation with many thorny, real-world application
challenges. That said, the ways that opacity endures in spite of such simplification reveal something
important and fundamental about the limits to overcoming it.
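In that spirit, the following is a hedged, classroom-scale illustration rather than code from the actual Coursera assignments: a classic teaching model (logistic regression on the scikit-learn digits dataset) trains in moments on a personal computer and classifies accurately, yet its learned parameters form a matrix of real numbers that resists human-scale semantic reading.

```python
# Lightweight "audit" in the spirit described above: train a classic
# classroom model and inspect what it actually learned.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)   # 1797 small images of digits 0..9
model = LogisticRegression(max_iter=5000)
model.fit(X, y)

print("training accuracy:", model.score(X, y))
# 10 classes x 64 per-pixel weights: accurate, yet nothing in these
# numbers reads as a human-interpretable rule.
print("learned weight matrix shape:", model.coef_.shape)
print("first weights for class 0:", model.coef_[0, :5])
```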
Machine learning algorithms do not encompass all the algorithms of interest to scholars now studying
what might be placed under the banner of the “politics of algorithms.” However, they are interesting to
consider specifically because they are typically applied to classification tasks and because they are used
to make socially consequential predictions such as, “How likely is this loan applicant to default?” In the
broader domain of algorithms implemented in various areas of concern (such as search engines or credit
scoring) machine learning algorithms may play either a central or a peripheral role and it is not always
easy to tell which is the case. For example, a search engine request is algorithmically driven (except for
the part [generally totally invisible to users] that may be done manually by human workers who do
content moderation, cross-checking, ground truthing and correction—
http://www.wired.com/2014/12/google-maps-ground-truth/), but search engine algorithms are not, at
their core, “machine learning” algorithms. Search engines employ machine learning algorithms for
purposes, such as detecting ads or blatant search ranking manipulation and prioritizing search results
based on the user’s location. (See the question and response on this Reddit AMA with Andrew Ng about why companies make their algorithmic techniques public–https://www.reddit.com/r/MachineLearning/comments/32ihpe/ama_andrew_ng_and_adam_coates/cqbkmyb–and this Quora question and response about how machine learning contributes to the Google search engine–http://www.quora.com/Why-is-machine-learning-used-heavily-for-Googles-ad-ranking-and-less-for-their-search-ranking)
While not all tasks that machine learning is applied to are classification tasks, this is a key area of
application and one where many sociological concerns arise. As Bowker and Star note in their account of
classification and its consequences that “each category valorizes some point of view and silences
another,” and that there is a long history of lives “broken, twisted, and torqued by their encounters with
classification systems,” such as the race classification system of apartheid South Africa and the
categorization of tuberculosis patients (Bowker and Star, 1999). The claim that algorithms will classify
more “objectively” (thus solving previous inadequacies or injustices in classification) cannot simply be
taken at face value given the degree of human judgment still involved in designing the algorithms–
choices which become built-in. This human work includes defining features, preclassifying training data,
and adjusting thresholds and parameters.
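As a hedged sketch of how such built-in choices operate (the data, features, and threshold values here are all hypothetical), the same trained model can approve or deny the same applicant depending solely on a decision threshold a human designer chose:

```python
# Human judgment built into an algorithmic classifier: the designer's
# choice of decision threshold changes the classification outcome.
# All data and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))  # four human-defined applicant features
# Preclassified ("labeled") training data, as the text describes.
y = (X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = X[:1]
p = model.predict_proba(applicant)[0, 1]  # model's estimated probability
for threshold in (0.3, 0.5, 0.7):         # a designer's built-in choice
    decision = "approve" if p >= threshold else "deny"
    print(f"threshold {threshold}: {decision}")
```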
Opacity
Below I define a typology starting first with the matter of “opacity” as a form of proprietary protection
or as “corporate secrecy” (Pasquale, 2015). Secondly, I point to opacity in terms of the readability of
code. Code writing is a necessary skill for the computational implementation of algorithms, and one that
remains a specialist skill not found widely in the public. Finally, arriving at the major point of this article,
I contrast a third form of opacity centering on the mismatch between mathematical procedures of
machine learning algorithms and human styles of semantic interpretation. At the heart of this challenge
is an opacity that relates to the specific techniques used in machine learning. Each of these forms of
opacity may be tackled by different tools and approaches ranging from the legislative to the
organizational or programmatic and to the technical. But importantly, the form (or forms) of opacity
entailed in a particular algorithmic application must be identified in order to pursue a course of action
that is likely to mitigate its problems.
Forms of Opacity
Opacity as Intentional Corporate or State Secrecy
One argument in the emerging literature on the “politics of algorithms” is that algorithmic opacity is a
largely intentional form of self-protection by corporations intent on maintaining their trade secrets and
competitive advantage. Yet this is not just about one search engine competing with another to keep
their “secret sauce” under wraps. It is also the case that dominant platforms and applications,
particularly those that use algorithms for ranking, recommending, trending, and filtering, attract those
who want to “game” them as part of strategies for securing attention from the public. The field of search
engine optimization does just this. An approach within machine learning called “adversarial learning”
deals specifically with these sorts of evolving strategies. Network security applications of machine
learning deal explicitly with spam, scams, and fraud and remain opaque to be effective. Sandvig notes
that this game of cat and mouse makes it entirely unlikely that most algorithms will be (or necessarily
should be) disclosed to the public (Sandvig et al., 2014: 9). That said, an obvious alternative to
proprietary and closed algorithms is open source software. Successful business models have emerged
out of the open source movement. There are options even in “adversarial learning” such as the
SpamAssassin spam filter for Apache.
On the other hand, Pasquale’s more skeptical analysis proposes that the current extent of algorithmic
opacity in many domains of application may not be justified and is instead a product of lax or lagging
regulations. In his book The Black Box Society: The Secret Algorithms that Control Money and
Information, he argues that a kind of adversarial situation is indeed in play, one where the adversary is
regulation itself. “What if financiers keep their doings opaque on purpose, precisely to avoid or to
confound regulation?” (Pasquale, 2015: 2). About this, he defines opacity as “remediable
incomprehensibility.”
The opacity of algorithms, according to Pasquale, could be attributed to willful self-protection by
corporations in the name of competitive advantage…