Thursday, November 19, 2020

Well, dreams, they feel real while we're in them right?

 



Since we are both relatively new to the concepts of programming computers, let's keep it basic. I'm not going to assume you have advanced knowledge of mathematics, only a basic understanding. Programming can become convoluted with obscure mathematical concepts very quickly. Instead, let's look at the ideas of algorithmic design and data structure techniques in a more familiar setting. To my classmates: I apologize, you may have heard this before.

 

If you haven't seen the movie Inception (Warner Bros., 2010), you'll need to see it to grasp this fully. We'll try to keep it as simple as we can, but with terms like algorithmic design and structure techniques this may be tough. In the movie we are introduced to the abstract concept of entering another person's dream to carry out some type of "adventure". Throughout the movie there are scenes of increasing intensity as the characters try to carry out "mind-crimes" on other people. In the climactic sequence they dive deeper than ever before: they enter a dream, then a dream within a dream, then a dream within a dream within a dream. They go deeper than ever before and take a chance on not waking up in time to save themselves (another abstract idea: if you die in a dream, you die for real).



Keeping all of this in mind, programming can be very similar. We take the abstract concept of data on a computer and put it to some use. This is the idea of a data structure, and the implementation we use to manipulate it is the algorithm, based on the algorithmic design we desire. So, to be clear, the structure is what we are working with, and the algorithm is what we use to manipulate it. In the movie, the data structure would be the idea of a dream, and the algorithm would be the people within the dream making the outcome/output what they desired.

So, as with most things computer oriented, we start with the finish: what is our desired output? What do we need the program to "do"? Once we have that output we can decide on the structure and the algorithms. There are many data structures, such as stacks, lists, queues, and trees. Each of these has its own specific algorithms that it follows to maintain the data it is given. So, if you think of the structure, you are also thinking of the algorithm. I told you this was a very abstract concept.

Keeping this in mind, why would we have a bunch of different structures? Why not just use one for simplicity's sake? Wouldn't it be simpler and easier to just use one? Of course not. Each structure has its own advantages depending on what our desired output is. The main advantage is improved efficiency in terms of time gained or lost in operations.

Well then, which is best, you ask? We'll just use that one all the time and it will be easy and simple. Not so fast. As an example, we'll search a list, starting with the easy ideas and going up from there. There are two basic ideas for searching a list: linear and binary. A linear search is exactly what it sounds like: check each object in order until you find, or don't find, what you are looking for. A binary search is a little more sophisticated. It requires the list to be sorted, and then it repeatedly halves the list until it finds, or doesn't find, what you are looking for. So now I ask you, which is better? The best? Of course, it all depends on how much data you are searching through. If you did a linear search on Amazon for bicycle tires it would take forever, but a binary search would be dramatically quicker, because the work grows with the number of halvings rather than the number of items. We can say, without getting deep into the math of it, that the amount of time it takes to search is directly related to the size of the data being searched, agree? Great. The cost of an algorithm for searching a data structure is directly related to the size of the data structure. What does any of this have to do with a movie?
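To make the two search ideas concrete, here is a minimal sketch in Java. The class and method names are just illustrative; note that the binary search assumes its input array is already sorted.

```java
// A minimal sketch of the two search ideas discussed above.
// Note: binary search only works on data that is already sorted.
class SearchDemo {

    // Linear search: check each element in order until we find the target.
    static int linearSearch(int[] data, int target) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == target) return i;
        }
        return -1; // not found
    }

    // Binary search: repeatedly halve the (sorted) range.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            else if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }
}
```

On a list of a million items, the linear loop may touch all million entries, while the binary loop touches about twenty, which is the whole "size of the data" point above.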



The concepts of algorithmic design and data structures define how we develop a program and how we get as deep as we need to in order to complete a task. We take the desired output and take steps toward it. As we move from step to step we implement the different types of algorithms we need to manipulate the data, progressively moving closer to our output. The structures are normally developed inside their own files and then compiled together to make a complete program or app. This allows the program to be developed by many different people, or teams of people, working independently toward a common goal. The result is a finished product. So, the initial dream that we enter is the first step of designing the data structure and moving one step forward. The next dream is the progressive step and implementation of algorithms to move toward our goal. The next dream is dividing the work into files so we can work together. The climax, the final dream and waking up, is compiling the structures and algorithms together into our final, working product.




 


Thursday, October 22, 2020

Object Oriented Programming (OOP) with a side of Java

 



Okay. So you want to code, and you want to do it with Java? 

I am almost as new to Java as you are, so let's see what we can get into. Java is an object-oriented programming language. What on Earth does that mean? Well, I found out just a few short hours ago, so here's the short version. It is exactly what it sounds like: objects, like tree or car or dog or cat. The language is based around the concept of giving objects attributes that can be altered and applied to groups of similar objects. A group like this is a "class" of objects. So, for example, mammals would be the class, and humans, canines, and cats would be objects themselves.

Well, none of these are static figures, so they all have some type of action or behavior associated with them. These behaviors are called "methods" in the Java environment, like talking or barking or scratching. The last big-picture item with Java is your inheritance. No, not a large sum of money or a new house, but the traits that you inherited from your family. Let me move a few words around and see if it still makes sense: the methods you inherited from your class of objects. Basically, an object can inherit attributes and methods from a class of objects within the code. There are obviously scores upon scores of data on this online, but hopefully this will get you started thinking in line with Java and OOP. I've included some links here and here to get you moving.
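The class/method/inheritance ideas above can be sketched in a few lines of Java. The names here (Mammal, Dog, Cat) are just my own illustrative picks, not from any library:

```java
// A sketch of classes, methods, and inheritance. Class names are illustrative.
class Mammal {
    String name;                      // an attribute every mammal has
    Mammal(String name) { this.name = name; }
    String speak() { return "..."; }  // a "method": a behavior
}

class Dog extends Mammal {            // Dog inherits from the Mammal class
    Dog(String name) { super(name); }
    @Override
    String speak() { return "Woof"; } // barking behavior overrides the default
}

class Cat extends Mammal {
    Cat(String name) { super(name); }
    @Override
    String speak() { return "Meow"; } // meowing behavior
}
```

A Dog never declares a `name` attribute of its own; it inherits it from Mammal, which is the "inheritance" idea in one line.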

Now, you'll need to go here and download the Java platform (FYI, it's called the JDK, the Java Development Kit, but no one told me that). It will download, then install, and then disappear. Don't be alarmed, it did install. After this not-so-pleasant experience you'll need a text editor. You are welcome to use the notepad on your device, but I recommend Visual Studio Code and the Java extension. I know this may sound crazy, but download VS Code from this link and install it. Here's the magic: once you start a new file and save it with a .java file extension, the program will ask you to install the Java extension. So, you don't have to go find it; it will come to you. Last, once you have your code all typed up pretty and saved, you'll need to compile and run it in either a shell or the command line. I'll leave this part up to you.
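If you want a first file to try the compile-and-run step with, here is the classic starter. The greeting text is just my choice; the only real rule is that the file name must match the class name:

```java
// Save this as HelloWorld.java -- the file name must match the class name.
// Then compile and run it from the command line:
//   javac HelloWorld.java
//   java HelloWorld
public class HelloWorld {
    static String greet() {
        return "Hello, Java!"; // the message we print (any text works)
    }

    public static void main(String[] args) {
        System.out.println(greet()); // prints "Hello, Java!"
    }
}
```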



For now, go download some stuff and get reading. We've got a long way to go and a short time to get there!

Monday, October 19, 2020

Fundamental Concept of Operating Systems

 

An operating system is simply an interface for the user (whether a human being or a program) to interact with the hardware of a computer. The major functions of an OS are storage management, memory management, protection and security, and process management. Protection is responsible for limiting access to certain files, and security is responsible for guarding the system from attack, either internal or external. Process management is responsible for managing the CPU and for managing processes as they are executed, suspended, synchronized, or communicated with. Memory management keeps track of which parts of memory are in use and who is using them. It also decides who gets to use memory and allocates it as necessary. Storage management is broken into a few nodes. File-system management takes charge of the file system, creating and deleting files and directories to organize files, as well as mapping files into secondary storage. Mass-storage management takes charge of free space, storage allocation, and disk scheduling. Caching is the means by which the system uses memory to optimize performance. Basically, these few systems make up the base of a functioning OS.

Underlying the OS are system services that are pivotal to its operation. These services provide an environment for programmers and users to access the functionality of the parts of the OS. I have outlined these in the concept map below.



A program is a static entity that is waiting to be executed. Think of it as code that is written, but it is just words. It becomes a process only when it is loaded into memory and executed. Better yet, a cookbook is only paper, but when you work one of the recipes it becomes food. So, cooking is a process and the cookbook is the program.

A process is an active entity and has 5 distinct states: new, ready, running, waiting, and terminated. It was funny to me to spend so much time on each of these states. Why? Because they seem overtly obvious. The naming of the states could not have been more common. It would have been like naming Florida "Sunny" or Nebraska "Flat". New is changing from program to process (getting started), ready is getting ready to run by having resources assigned to the process, running is, well... running, waiting is waiting for some type of I/O to happen (press yes or no), and terminated is "we're done here".
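Just to make those obvious-sounding names concrete, here is a hedged little sketch of the five states as a Java enum with the usual transitions between them (the method names are my own, not from any OS API):

```java
// The five process states from the text, as a Java enum.
enum ProcessState { NEW, READY, RUNNING, WAITING, TERMINATED }

// A toy process that walks through the states. Method names are illustrative.
class SimpleProcess {
    ProcessState state = ProcessState.NEW; // just created from a program

    void admit()    { state = ProcessState.READY; }      // resources assigned
    void dispatch() { state = ProcessState.RUNNING; }    // the CPU picks it up
    void block()    { state = ProcessState.WAITING; }    // waiting on some I/O
    void finish()   { state = ProcessState.TERMINATED; } // we're done here
}
```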

The process control block (PCB) is responsible for coordinating the moving of a process through these states. It is additionally responsible for coordinating the resources during each state. It controls the state, increments the program counter, manages the CPU registers, CPU scheduling, accounting (funny way of saying all the “numbers” stuff, like time or process numbers), I/O status (info on the physical resources allocated to the process), and memory management. So, it does a lot.

Threads are such a huge part of the modern computer that it is strange how quickly they can be defined, yet how much time the text spent talking about them. Perhaps I missed a lot of what it was saying, but I got the ideas. There are two types of processes here: single-threaded and multithreaded. You would think that multithreading is the best, and you'd be right for most computing that requires human interaction. A single-threaded process may be ideal in some embedded applications, but for the most part multithreading is the way we go. Think of it like this: you need to tell your entire company about an upcoming event, and you choose to send an email about it. Single-threaded, you send an individual email to each person in the company, one after another. Multithreaded, the emails go out in parallel. I really feel like I oversimplified this, so here are the parts: the code, data, files, registers, stack, and the thread itself. A single-threaded process has one of each. In a multithreaded process the threads share the code, data, and files, while each thread gets its own registers and stack. So, the work of a multithreaded process is spread out across several threads, allowing parallel processing, which means increased performance and happier users.
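The email analogy above can be sketched with Java's built-in Thread class. The "emails" here are just increments of a shared counter; the counter is the shared data, while the JVM and OS give each thread its own stack and registers:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A small sketch of the multithreading idea: several threads share the
// same data (the counter), and the work happens in parallel.
class ThreadDemo {
    // "Sends" n emails in parallel; returns how many were sent.
    static int sendAll(int n) {
        AtomicInteger sent = new AtomicInteger(0); // shared data
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(sent::incrementAndGet);
            workers[i].start(); // each thread gets its own stack
        }
        for (Thread t : workers) {
            try {
                t.join(); // wait for every worker to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return sent.get();
    }
}
```

AtomicInteger is used so the shared counter stays correct when several threads touch it at once, which previews the critical-section problem below.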

The critical-section problem sounds like we've just gotten into the part of brain surgery that is most tricky and dangerous and "we've got a problem". In fact, it is when the critical sections of two running processes try to execute at the same time. It's so easy to see, but hard to describe. I guess it would be similar to listening to a book on your phone: you're just getting to the climax when your phone rings. The call is one you have been waiting for, but you really don't want to stop listening at the moment. So, you can either pause the book or silence the call. But what to do? You've reached the critical-section problem. When this happens in a process, one solution offered by the text is to assign a start and an end marker to each critical section. Thus, when one critical section is executing and another tries to start, it cannot until the end marker is passed by the processor. This could cause delay, but it prevents critical data from being missed and a crash from occurring.
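The "start and end markers" described above map neatly onto Java's synchronized block, where only one thread can be inside the marked region at a time. This is just an illustrative sketch (the bank-balance example is my own, not from the text):

```java
// A sketch of a critical section guarded by start/end markers.
class CriticalSectionDemo {
    private final Object lock = new Object(); // the "marker" object
    private int balance = 0;                  // shared data two threads might touch

    void deposit(int amount) {
        synchronized (lock) {   // start marker: acquire the lock
            balance += amount;  // the critical section itself
        }                       // end marker: release the lock
    }

    int balance() {
        synchronized (lock) { return balance; }
    }
}
```

If two threads call deposit() at the same moment, one simply waits at the start marker until the other passes the end marker, which is exactly the delay-for-safety trade-off the text describes.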



Memory management is key to the operation of a computer. Because the operation of a computer relies on the movement of processes through different channels of memory, a management unit is critical. Relative speed and protection are the major players in memory management. We must keep user data, multiple users' data, and OS data separate. While we are keeping things safe and separate, we need to keep speed and performance in scope as well. This is accomplished through hardware and address binding.

One way to keep things safe is by providing a separate memory space for each process. To keep the spaces separate, two registers (base and limit) are assigned fixed, legal addresses that define a range. If an instruction address is outside this range it is deemed illegal (it does not belong to the process in execution). This results in a trap, or fatal error.

The other way is through address binding within the memory management unit (MMU). The instruction's address location is bound to the data associated with it. The most common binding method is execution or run-time binding, where the binding cannot occur until run time. When the CPU assigns (binds) addresses to a program, these become virtual addresses. When the MMU assigns the physical location, this becomes the physical address. The set of all virtual addresses for a program is the virtual address space, and the set of all physical addresses corresponding to these virtual addresses is the physical address space (Silberschatz et al., 2014). Within this scheme the base register becomes the relocation register, and its value is added to every address generated by a user process at the time the address is sent to memory (Silberschatz et al., 2014). So, when a virtual address is passed to the MMU, the relocation register produces a physical address within the physical address space, and the process's data is loaded into memory at that location.
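The base/limit check and the relocation-register translation together fit in a few lines. This is a hedged toy model, not a real MMU: the register values below are made-up examples, and the "trap" is just an exception:

```java
// A toy model of the hardware checks above: a limit check on the virtual
// address, then relocation-register translation to a physical address.
class Mmu {
    long relocation; // the relocation (base) register
    long limit;      // size of the process's legal address range

    Mmu(long relocation, long limit) {
        this.relocation = relocation;
        this.limit = limit;
    }

    // Legal virtual addresses run from 0 to limit - 1; anything else traps.
    long translate(long virtualAddress) {
        if (virtualAddress < 0 || virtualAddress >= limit) {
            throw new IllegalStateException("trap: illegal address " + virtualAddress);
        }
        return relocation + virtualAddress; // the physical address
    }
}
```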

Memory management is designed to keep the processes from getting mixed up with one another and the operating system. It uses hardware and address binding to keep things straight within the confines of logical and physical address space. It is also responsible for managing the speed of the instruction movements from storage to memory to CPU cache.




A file system is the lifeblood of a computer. Without a file system you have a fancy set of wires, silicon, plastic, and copper parts. It would accept a charge and that's about it. The file system is in control of the computer. It is responsible for housing all of the files that make the computer function. It is also responsible for managing those files and keeping up with where they are located within storage for easy retrieval. There are 6 basic functions of a file system: creating a file, writing a file, reading a file, repositioning within a file, deleting a file, and truncating a file. If there were no file system, the computer would not function.

A directory is like the card catalog for the file system. Each file system must have a directory to keep things straight. The directory contains the map of where each file is in the file system. It uses each file's unique name, identifier, and path to maintain order within the file system. There are many structures a directory can use to maintain the file system. The text indicated the following 5 generic structures.

Single-Level Directory
This structure is fairly self-explanatory. It is a single directory with all files in one place. It is flat, with everything "right there". While simple, it has a single glaring problem: as the directory grows, the file system gets increasingly difficult to manage. Keeping track of all those file names is increasingly challenging. Because of this limitation, a multiple-user system is virtually impossible.

Two-Level Directory
This structure is similar to the single level but adds a new, higher-level directory for each user. A Master File Directory is accessed when a user logs on, and their specific user-level directory is then accessed. So, provided files are saved under a specific user, the files are kept separate in separate directories. Again, this sounds good and is good for most personal computers, but it lacks a good way to share files across users. If a file is saved in one user's directory, it cannot be accessed by another user. There is no sharing with a two-level directory.

Tree-Structure Directory
This structure is sort of a two-level within a two-level, and so on. There is a main directory that contains the user-specific directories. The user-specific directories contain subdirectories to keep all of the many files in order and maintain efficiency and performance. There aren't really drawbacks that I would consider significant, but there are a couple worth noting. Files in the structure can be shared, but specific permission must be granted between the user directories. And the path name must be specified to change to the correct directory and access the requested file. This is the most common of all directory structures.

Acyclic Graph Directory
This file structure is a little confusing. It is a tree structure with the ability to share subdirectories and files. This is not sharing two copies of the same file; this is a living document between two users, where both are working in the same file or directory and see the changes in real time. The challenge lies in how we all work with the file if it only exists in one place. Of course, you could copy it to your directory, but then you are not sharing. So, enter the concept of a link. We all know what it is: simply a pointer to the location of the file we want within the shared directory. But how does the system handle a request to delete a shared file? Are there any instances of the file still in use? Do you really want to delete the file? By using links for the file, the system can monitor (a simplified term) the links to see if anyone is still using the file, and thereby continue with the delete or deny it.
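The monitor-the-links idea above is essentially reference counting, which can be sketched in a few lines. This is purely illustrative bookkeeping, not a real file-system API; the class and file names are my own:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the "link" idea: a shared file is only really deleted
// once the last link to it is removed.
class SharedFileTable {
    private final Map<String, Integer> linkCount = new HashMap<>();

    void addLink(String file) {
        linkCount.merge(file, 1, Integer::sum); // one more user of the file
    }

    // Returns true only if the file was actually deleted (no links left).
    boolean removeLink(String file) {
        int remaining = linkCount.merge(file, -1, Integer::sum);
        if (remaining <= 0) {
            linkCount.remove(file);
            return true;  // last link gone: safe to reclaim the file
        }
        return false;     // someone is still using it, deny the delete
    }
}
```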

General Graph Directory
With the acyclic structure, the system tries to avoid cycles within the directory, meaning it attempts to search the directory without having to cycle back over areas that were already searched. In the general graph structure, cycles are not the first concern. This structure is basically what happens when you add links to a basic tree structure. With a tree structure you can share files and directories, but access is direct; it is not done by link. When links are added you get a graph structure where multiple directories and multiple files can be shared via links as well as direct path location. The drawback to this is the cycle. If we search an acyclic graph, the search does not scan the same sections twice, avoiding the cycle. With a general graph, the same sections may be scanned more than once, resulting in performance issues and possibly infinite loops. To compensate, a cyclic graph will institute "garbage collection", marking each section as it scans and then cleaning up behind itself as it goes, preventing a rescan of the same section. This is expensive and time consuming, so it is not very popular.

 

I/O devices vary greatly. They can be external devices such as a mouse, keyboard, printer, or monitor. They can also be internal devices like a network (PCI) card or a GPU. These are connected to memory via buses and communicate with the CPU through driver software and controllers. There are controllers at each end of the communication: the host controller internally and the device controller externally. The device controller sends its request to the host controller. The host controller moves the request to memory, where the CPU can translate it and respond to memory. Then the host controller returns the action to the device controller, where it is translated and the physical action takes place.



Security and protection are ever-growing aspects of modern computers. As hardware and software grow in complexity, so does the threat of malicious attack. "Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. Security, is a measure of confidence that the integrity of a system and its data will be preserved." (Silberschatz et al., 2014) There are two main principles of protection, the principle of least privilege and the need-to-know principle, that help to define the process of protection. The principle of least privilege is the idea that a program or user is given "just enough privileges to perform their tasks" (Silberschatz et al., 2014). This principle is funny because it sounds like we are giving them just enough rope to hang themselves. The need-to-know principle centers around the concept that having enough access is great, but we should give just enough knowledge to complete the requested task and nothing more (Silberschatz et al., 2014). From these two principles we derive language-based and domain-based protection.

Language-based protection is protection written into the code of a user program. It is less secure than kernel-based protection: it relies on the accuracy of the programmer and the accuracy of the compiler and translator, and it is more prone to malfunction than hardware-based protection. It is more flexible in its approach, though. Language-based protection can be easily altered to make necessary changes after it is programmed. Hardware-based protection is less capable of being adjusted if there are flaws in the protection model.

Domain-based protection is centered around a domain containing a set of rules for the processes attempting to operate within it. When a process requests an operation, it is assigned to the domain with the permissions required to carry it out. If a program needs access to a specific file to operate, it will need to be assigned to the domain with permission to that file. If the program does not have the key for that domain, it will not be granted access to the file. Domain-based protection is flexible in that if a particular access is needed, the domain can switch with another to provide the access. It does, however, suffer from the difficulty of maintaining and revoking permission to specific locations if multiple programs access the location.

The access matrix is a grid system, a file table that indicates permissions based on domains. Each file is assigned to a domain and then the permission is assigned, as in the example provided in the text (Silberschatz, Galvin, & Gagne, 2014). This is the matrix at its most simple. Note that each domain can also be described as an object within the access matrix to allow for domain switching, a technique that grants different permissions to objects as they are needed.
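To make the grid idea concrete, here is a tiny hedged sketch of an access matrix in Java: rows are domains, columns are objects (files), and entries are the allowed operations. The domain and file names are made up for illustration:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A toy access matrix: matrix[domain][object] = set of allowed rights.
class AccessMatrix {
    private final Map<String, Map<String, Set<String>>> matrix = new HashMap<>();

    // Grant a right (e.g. "read") on an object to a domain.
    void grant(String domain, String object, String right) {
        matrix.computeIfAbsent(domain, d -> new HashMap<>())
              .computeIfAbsent(object, o -> new HashSet<>())
              .add(right);
    }

    // Check whether a domain holds a given right on an object.
    boolean allowed(String domain, String object, String right) {
        return matrix.getOrDefault(domain, Map.of())
                     .getOrDefault(object, Set.of())
                     .contains(right);
    }
}
```

Anything not explicitly granted comes back false, which is the principle of least privilege falling straight out of the data structure.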


Security is ensured by the protection method that we choose. There are 4 high-level classes of security that need to be addressed: the physical environment, the user/human, the network, and the operating system. Each of these poses its own vulnerabilities and needs some form of protection. Physical environment means that the location of the computer needs to be protected from unauthorized access. The user or human needs to have correct, granted authorization to access the device and its contents. Network security deals with protecting connected devices from outside threats via hardware and protective software. Lastly, the operating system needs to be protected as described above, with either hardware/domain-based or language-based security. There are many ways a computer can be attacked, and each of them must be addressed to maintain security.






This has been a great study of the fundamentals of an operating system. It was very helpful to learn how the software talks to the hardware. Part of this course has taught me that I may want to consider a career path in OS development or engineering architecture.

 

References

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). Retrieved from https://redshelf.com/



Sunday, July 5, 2020

Network Security – Safety Never Takes A Holiday


When someone says "network security" the first thing that comes to mind is Paul Blart: Mall Cop (Sony Pictures, 2009). The connotation is simple: security is a big thing that no one can see that supposedly protects your network. The network is the 400-pound gorilla in the room that everyone uses but most don't know how it works; "just call IT (Paul Blart), he can fix it". The two parts, network and security, must work in harmony for it to be a seamless protective service. Paul Blart did not care what people thought of him; he just did his job to the best of his ability.

 

At my company, probably similar to yours, we have a team of network security employees that we trust to keep everything safe. But what are they protecting? There are hardware and software components that are subject to attack at any given moment. The software or hardware that every user relies on can be compromised through email (spam or phishing), or through malicious downloads that ride along with seemingly harmless sources. It is the job of the security team to detect these threats and keep them from accessing the company's network, and from there multiple machines and ultimately the entire company.

 

On the downside, if an attack is successful the attacker could easily seize control and demand anything from a monetary ransom to other concessions to "release" their hold and restore the software and hardware to the company. In other cases, the attacker may not want anything more than to cause harm, in which case malware would simply "seek and destroy" company property on the network and cripple the company with no hope of recovery. Both of these are truly devastating to businesses, and both are very easily avoidable.

 

As computers have shrunk from large mainframes in their living-room-sized boxes into the tiny little phone lying next to this laptop, knowledge of the technology has migrated as well. Once, only large corporations and governments could afford computers. Now everyone has a computer, and many homes probably have several "devices". As the technology moved into the public domain, the "tech assassins" grew in numbers as well. Knowledge of technology was no longer only available to those with money; it was now available to all via the internet. Just today I looked at several viable, trusted sites on the web to see how to unlock an iPhone and try to bypass its security. It's actually quite simple; you just have to know where to look.

 

So, the need for a robust network security force exists for pretty much everyone. Now the odds of a hacker targeting you as an individual may be pretty slim, but attacks on corporations and the government are almost endless. This brings me to my big question, why is it so hard for us to make a security system that is unbreakable? It seems that if we could just stop changing the software on the user machines security would be basically impenetrable. This sounds absurd coming out of my mouth, but it is almost exactly what my company did for almost ten years.

 

Obviously, if you stop doing upgrades and keeping up with the technology you will ultimately fall far behind, but who are you chasing? The competition, your friends, the Joneses? That's a question I can't answer, but I can tell you that technological advances will not stop and are in fact accelerating daily.

 

My company, which I will not name, had an archaic view of technology. The business that we are in is almost exclusively computer-based and is definitely a "tech field". We are at the forefront of our type of technology and are pioneering the tech that we produce. Which is freakishly frightening, because in the front of the house (customer service and basic computing) we made the leap from Windows XP to Windows 7 just 4 years ago, and from 7 to 10 only last year. The belief from our executive team was that we were insignificant and didn't need the newer technology. It took two attacks, one email spam and the other phishing, to get them to listen and understand. Spoiler alert: it also answered my big question.

 

Because of evolving technology in network architecture and the infinite resources on the internet, it was quite easy for a hacker to get into our old, outdated, unprotected email system. OS developers search out exploits in their current software and patch them, so when they send an update for the OS it includes the patch for the exploit. Since we never updated our OS (and Microsoft was no longer supporting it), the door was open for attacks. Once they had access to our email, our network was wide open, and they did what they wanted. Ironically, the first attack used our own email system to direct a DoS attack at our website. The second, the phishing attack, installed ransomware, shut us down for a few days, and was very costly to remove. I was never more thankful for backups than that day.

 

So, obviously we will never be protected from security threats by just doing nothing. The threat is always there, waiting for someone to find it. Since technology is now everywhere, the need for network security has increased dramatically. The need exists both for digital control and for situational awareness (Li et al., 2019). Digital control is the virus detection and anti-malware software that protects the software and hardware on the network. This is normally real-time protection that runs on the system and relies on little to no input from the network security team. Situational awareness is about the users on the network and their education and awareness of what a threat looks like. The widest gap in any security is not the hardware or software but the humans using them. The vast majority of attacks come through exploiting a person's trust and getting inside the easy way (Kumar & Kumar, 2014). It is crucial for companies and individuals to educate themselves and their employees on the types of threats and how to avoid them, and also to empower them to report a breach when it happens, to minimize the damage of a successful intrusion. Education and awareness are possibly the most important parts of any network security.

 

“Safety Never Takes A Holiday!” – Paul Blart



References

 

Kumar, G., & Kumar, K. (2014). Network security – an updated perspective. Systems Science & Control Engineering, 2(1), 325–334. https://doi.org/10.1080/21642583.2014.895969

 

Li, Y., Huang, G., Wang, C., et al. (2019). Analysis framework of network security situational awareness and comparison of implementation methods. EURASIP Journal on Wireless Communications and Networking, 2019, 205. https://doi.org/10.1186/s13638-019-1506-1

 

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/



Thursday, July 2, 2020

The Keys to Your Kingdom: Network Security

Information and information systems are critical to the day-to-day life of businesses and individuals. So, it follows that security of the hardware (system) and software (information) is also a critical component of day-to-day life. According to Frank Vahid and Susan Lysecky in Computing Technology for All, security is not only a function of the hardware and software but of a human component as well. While it is easiest for someone to target exploits in an OS, it is almost as easy to target a user. Human curiosity will always be one of the easiest targets to exploit.


There are numerous ways that a business or individual can be attacked. The attack could come as a virus in a download, a spam or phishing message in an email, or malware (a general term covering viruses and similar hostile software) riding alongside a download. Spam and phishing normally exploit a user’s action, while a virus or malware needs no further action once it is downloaded. We’ll explore phishing and spam a bit more below. All of these attacks are meant to control either software or hardware with intent to harm. The harm may be monetary, personal defamation, or damage to a business’s reputation. Any successful attack will almost always result in some kind of loss to the individual or business.

One particularly annoying and effective attack is a DoS, or Denial of Service, attack. This type of attack uses either a virus or email spam to recruit machines against a specific target. When the attack commences, the virus or malware sends continuous ping requests to the target. The ping requests overwhelm the target’s ability to accept new, legitimate requests, so the target returns a “busy signal,” a denial of service or timeout. Now, a normal ping test is a single ping that returns information about the speed of the computer and network between two locations. The continuous ping in a DoS attack is thousands of ping requests per second from a multitude of locations, all of them aimed at a single target. If the attack is large enough it can disrupt service at the target and across the surrounding network; a large enough flood can take down an entire portion of the internet around a specific server.
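The crowding-out effect described above can be sketched in a few lines. This is a toy model, not a real attack tool: the server handles a fixed number of requests per second, anything beyond that is dropped, and the capacity and traffic figures are invented for illustration.

```python
def serve(requests_per_second, capacity=100):
    """Return (handled, dropped) for one second of traffic.

    The server can only answer `capacity` requests per second;
    everything above that gets the "busy signal" described above.
    """
    handled = min(requests_per_second, capacity)
    dropped = requests_per_second - handled
    return handled, dropped

# Normal load: everyone gets served.
print(serve(80))      # (80, 0)

# Flood: thousands of pings per second from many sources crowd
# out the legitimate users entirely.
print(serve(5000))    # (100, 4900)
```

The point of the sketch is that the attacker never needs to "break in"; simply exceeding the capacity number is enough to deny service to everyone else.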

The CAN-SPAM Act defines spam as unsolicited commercial email that the recipient has no affiliation with and that was sent without the recipient’s consent. Emails are also considered spam if they were sent in bulk without the recipient’s consent; while not all bulk email is spam, the vast majority is. Research from the International Journal of Cyber Criminology indicates that up to 90% of emails sent are possibly spam, depending on the definition adopted (Yu, 2011). Email spam is primarily used for generating revenue or promoting products; however, it is also used for stealing information and phishing (hang in there, we will get to it). When a spam attack is used properly it can infect an entire organization’s network. With control of the organization’s network and email, an attacker can launch attacks such as DoS attacks without disclosing the identity of the hacker inciting them.

Phishing, according to Taking the Bait, combines social engineering and complex attack vectors to create an illusion, in the eyes of the email recipient, that what is being offered or asked is not only legitimate but persuasive enough to prompt some action by the recipient (Lacey et al.). In particular, phishing involves getting the recipient to open an email and/or click through to another site and enter their personal information. Once they have opened the message or entered their information, the attacker has what they needed: access. If the message is opened within a business network, the phishing hacker can install ransomware or another type of virus to seize control and demand monetary compensation for releasing the business’s information. For an individual, the phishing scam may involve a person believing they are about to visit a trusted site and entering their personal information there. If this is a bank account, the thief now has your keys to the kingdom.


To protect against both of these types of attacks there are two primary defenses. The first is education. In instances of spam and phishing the attacker must gain access through a user accepting or opening the hostile email, so educating recipients on the warning signs and on what to do with the attacker’s email is the best line of defense we have. Second, there are numerous security programs out there, like SolarWinds MSP, SpamTitan, and MailWasher, that filter emails, search for specific verbiage, and compare senders against the user’s whitelists and blacklists. While these types of security software are robust and can be very helpful, they are not perfect. Because this software is available to the hacker just as readily as the user, it is very difficult for the manufacturer to stay ahead. So, it is ultimately the end user’s responsibility to keep him/herself safe from attacks. Education is the best line of defense.
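The filtering idea above can be sketched very simply: check the sender against user-kept whitelists and blacklists, then scan the body for suspicious verbiage. Real products like the ones named are far more sophisticated; the phrase list, addresses, and categories here are invented for illustration.

```python
# Phrases that commonly appear in spam/phishing; an invented example list.
SUSPICIOUS = {"verify your account", "wire transfer", "urgent action required"}

def classify(sender, body, whitelist=frozenset(), blacklist=frozenset()):
    """Sort one email into 'inbox', 'spam', or 'quarantine'."""
    if sender in whitelist:
        return "inbox"          # trusted sender wins outright
    if sender in blacklist:
        return "spam"           # known-bad sender
    text = body.lower()
    if any(phrase in text for phrase in SUSPICIOUS):
        return "quarantine"     # hold for the user to review
    return "inbox"

print(classify("boss@work.com", "Lunch?", whitelist={"boss@work.com"}))
print(classify("x@scam.net", "URGENT ACTION REQUIRED: verify your account"))
```

Even this toy version shows why the software alone isn’t enough: an attacker who knows the phrase list simply words the message differently, which is why the post keeps coming back to education.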


References

 

Lacey, David, et al. “Taking the Bait: A Systems Analysis of Phishing Attacks.” Procedia Manufacturing, vol. 3, 2015, pp. 1109–1116, 10.1016/j.promfg.2015.07.185. Accessed 26 June 2019.

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/

Yu, S. (2011). Email spam and the CAN-SPAM Act: A qualitative analysis. International Journal of Cyber Criminology, 5(1), 715–735. https://www.cybercrimejournal.com/Yu2011ijcc.pdf


computers in the workplace, or up in the air?

Aviation. When you hear the word aviation you probably think of airplanes and pilots. Of course, they are the two primary components of all things aviation. However, there is so much more that goes into getting you from point A to point B.

Throughout the United States and all over the world there are aviation professionals working diligently, on computers, to keep you safe and get you there in the shortest time possible. Remember, the safest place to be while traveling is on the ground. In the aviation field, computers have basically run everything for many years, and now they are taking over the cockpit as well.

As GPS technology unfolds and gets more and more precise, aircraft and aviation as a whole rely on it for everyday operation. You may believe that the pilot up front is watching out the window for other airplanes and obstacles, that they are responsible for keeping the airplane going in the direction you want to go, and even that the pilot is “flying” the plane at takeoff and landing. If this is your thinking, you are mostly wrong. In 2020, airplanes can, and most times do, fly themselves; the pilot is there to monitor and take over in the event of a malfunction. Of course, there are times when the pilots do fly the plane, but most of the time they cannot do it without help from the plane itself. The inclusion of computers and new technologies in modern aircraft has helped tremendously and will continue to make powered flight safer and more efficient.

 

Along with the emerging technology in the air, the same can be said on the ground. Just like the GPS and navigation aids in the airplane, the air traffic control system is being overhauled as well. The instruments used by ATC are now more accurate than ever, which allows the controllers to keep more aircraft safely sequenced closer together in the sky. These enhancements keep you safe and help you get to point B without incident.

 

So, as you can imagine, the pilots, controllers, instructors, and maintenance technicians are working as quickly as they can to keep up. At my place of employment that is exactly what we do: teach pilots and aviation professionals the current and upcoming technologies. It is of the utmost importance for them to keep up, as they are the people who keep us moving and the globe shrinking. Keep this in mind the next time you fly, and tell your Captain thank you when he slides on a nice smooth landing. See if he will give the computer on the airplane credit for its help.

 

Wednesday, July 1, 2020

traveling through the network

Packets of data are sent through the network, bouncing from one router to the next until they reach their assigned destination, an IP address. As they travel from point to point, each leg a “hop,” they request the next destination and then travel there. If they receive no reply, or no reply within a specified amount of time, they time out. Once they reach their destination they complete their task, and a response, or echo, is sent back to the origin via the same route. Thus communication happens.
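The hop-by-hop travel described above can be sketched as a toy simulation. Each router the packet visits decrements its TTL (time to live); if the TTL hits zero before the destination, the packet times out. The route names and TTL values here are invented for illustration.

```python
def travel(route, destination, ttl=30):
    """Return the list of hops taken, or None on a timeout."""
    hops = []
    for router in route:
        if ttl == 0:
            return None          # timed out before arriving
        hops.append(router)
        ttl -= 1
        if router == destination:
            return hops          # destination reached; an echo goes back
    return None                  # destination was never on the route

# A made-up five-hop route from a home network out to Google.
route = ["home-router", "isp-gateway", "backbone-1", "backbone-2", "google"]
print(travel(route, "google"))          # all five hops listed
print(travel(route, "google", ttl=3))   # None: timed out after 3 hops
```

Tracert works by exploiting exactly this TTL behavior, sending packets with TTLs of 1, 2, 3, and so on, so that each router along the way reveals itself in turn.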

The ping test to google.com was, as expected, very quick, while the tracert took considerably longer; however, the final-destination reply for the tracert aligned closely with the ping test. So I know that I have a stable, quick route to Google, and communication will be complete and speedy. The test to netregistry.com.au understandably took much longer to complete, with higher times and lost packets, but it was traveling to the opposite corner of the world, so that is to be expected. Interestingly, though, the ping test and the tracert came back with almost identical times for the last hop to the destination. I got a very similar result from the mail.ru tests; in fact, the max ping and the first packet in the tracert arrived in exactly the same time.

 

 

Pinged google.com with a total of 4 packets, each with a size of 32 bytes. The average time was 18ms, max 20ms, min 17ms. There were no packets lost on the trip.


 

Pinged netregistry.com.au with a total of 4 packets, each with a size of 32 bytes. The average time was 387ms, max 515ms, min 281ms. There was 1 packet lost on the trip.


Pinged mail.ru with a total of 4 packets, each with a size of 32 bytes. The average time was 280ms, max 332ms, min 196ms. There was 1 packet lost on the trip.
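The summaries above (average, max, min, packets lost) can be reproduced from the raw round-trip times, with None standing in for a lost packet. The per-packet times below are invented to be consistent with the google.com min/max reported above; the exact values weren't recorded.

```python
def summarize(times_ms):
    """Compute ping-style statistics from a list of round-trip times.

    None entries represent packets that were lost (timed out).
    """
    replies = [t for t in times_ms if t is not None]
    lost = len(times_ms) - len(replies)
    avg = sum(replies) / len(replies)
    return {"avg": avg, "max": max(replies), "min": min(replies), "lost": lost}

# Four 32-byte pings to google.com, plausible values near the reported run.
print(summarize([17, 18, 18, 20]))
# {'avg': 18.25, 'max': 20, 'min': 17, 'lost': 0}
```

Note that the real ping utility rounds the average down to a whole millisecond, which is why a run like this one would be reported as 18ms.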

 

 


 

Tracert for google.com took 11 hops. Times ranging from 6ms to 61ms. There were 2 timeouts but no failures due to timeouts.


Tracert for netregistry.com.au took 15 hops, from 7ms to 607ms. There were 13 timeouts and 4 timeout failures.

 

Tracert for mail.ru took 11 hops. The time between hops was from 2ms to 397ms. There were 11 timeouts and 3 timeout failures.

 


This assignment was particularly challenging for me, as ping and tracert have always been abstract ideas to me. So working through them was quite insightful.

I actually used the ping test to determine why my brand new, very expensive router wasn’t performing to my expectations. I pinged another computer in the house and one at a family member’s place nearby. The ping was okay with the first test and then extremely long, with many lost packets, on the second. I tried updating the drivers on the router, modem, and computers to no avail. So I tried changing the ethernet cable and re-pinged both machines. Voila! Problem solved. I had a CAT 5 cable from my modem to my router that was choking down the signal and not allowing communication to flow as expected. The ping proved it was the cord.

In another instance (anyone in the Northeast US can try this test), I used a continuous ping test on a few Sunday afternoons to test my ISP. I will not name them, as we may have classmates who work for them. On Sunday afternoons around 3PM we experience a dramatic drop in connectivity, and I have always believed that the ISP was either throttling or intentionally dropping service in my area. So I ran a continuous ping around the time I thought this would happen. On 3 of the 4 Sundays that I tested, the ping either ran extremely long or timed out for several minutes at a time. The timeouts occurred between my router and the ISP, so I knew it was them. I sent them the data, but as I am only the customer it made no difference, and I still suffer through Sunday afternoons.
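Pulling the Sunday-afternoon evidence out of a continuous-ping log can be sketched as a search for runs of consecutive timeouts. None marks a timed-out ping; the log values and the minimum-run threshold are invented for illustration.

```python
def outages(log, min_run=3):
    """Return (start_index, length) for each run of >= min_run timeouts."""
    runs, start = [], None
    for i, t in enumerate(log + [0]):    # sentinel value closes a trailing run
        if t is None:
            if start is None:
                start = i                # a run of timeouts begins here
        elif start is not None:
            if i - start >= min_run:
                runs.append((start, i - start))
            start = None                 # the run ended
    return runs

# Made-up log: mostly ~18ms replies, one isolated drop, one real outage.
log = [18, 19, None, 17, None, None, None, None, 20, 18]
print(outages(log))   # [(4, 4)]: one four-ping outage starting at index 4
```

The single dropped packet at index 2 is ignored as ordinary noise; only sustained runs count as an outage, which matches what the several-minute Sunday timeouts looked like.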

 

