1. Have you ever looked at the source code of the libraries/frameworks you use?
A. Monolog, Dispatcher, League\Csv\Writer
2. How do you organize your code?
3. Describe two good uses – and practices – for callback usage.
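As a hedged sketch (Python, with made-up function names), two common callback patterns are a completion hook, where the caller decides what happens when work finishes, and a strategy callback, where a generic algorithm delegates one decision to a callable:

```python
# Two common callback uses, sketched in Python.
# `fetch_data` and `on_success` are illustrative names, not a real API.

# 1. Completion hook: the caller supplies the "what happens next" logic.
def fetch_data(on_success):
    data = {"id": 1}              # pretend this came from a network call
    on_success(data)              # invoke the caller's callback with the result

results = []
fetch_data(lambda d: results.append(d))

# 2. Strategy callback: sorted() delegates the sort key to a callable.
names = sorted(["Bob", "alice", "Carol"], key=str.lower)
print(results, names)  # [{'id': 1}] ['alice', 'Bob', 'Carol']
```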
4. HMVC Pros and Cons – HMVC can do anything that MVC can, and more. One thing you will have struggled with in plain MVC is that you can't call a controller from another controller (unless you use a custom library or helper to do so); HMVC lifts that restriction.
5. MVC Pros and Cons
Model – The model layer is responsible for the business logic of the application. It manages the application state. This also includes reading and writing data, persisting application state, and it may even include tasks related to data management, such as networking and data validation.
View – The view layer has two important tasks: presenting data to the user, and handling user interaction. A core principle of the MVC pattern is the view layer’s ignorance with respect to the model layer. Views are dumb objects: they only know how to present data to the user; they don’t know or understand what they are presenting.
Controller – The view and model layers are glued together by one or more controllers. In iOS applications, that glue is a view controller, an instance of the UIViewController class or a subclass thereof.
A controller knows about the view layer as well as the model layer. This often results in tight coupling, making controllers the least reusable components of an application based on the Model-View-Controller pattern. The view and model layers don’t know about the controller. The controller owns the views and the models it interacts with.
Separation of Concerns – The advantage of the Model-View-Controller pattern is a clear separation of concerns. Each layer of the MVC pattern is responsible for a clearly defined aspect of the application. In most applications, there is no confusion about what belongs in the view and model layer. What goes into controllers is often less clear. The result is that controllers are frequently used for everything that doesn’t clearly belong in the view or model layer.
Reusability – Whereas controllers are often not reusable, view and model objects are easy to reuse. If the MVC pattern is correctly implemented, the view and model layers should be composed of reusable components.
Problems – If you’ve spent any amount of time reading books or tutorials about iOS or OS X development, then you’ve probably come across people complaining about the MVC pattern. Why is that? What is wrong with the MVC pattern? A clear separation of concerns is great. It makes your life as a developer easier. Projects are easier to architect and structure. But that is only part of the story. A lot of the code you write doesn’t belong in the view or model layer. No problem. Dump it in the controller. Problem solved. Right? Not really. Data formatting is a common task. Imagine that you are developing an invoicing application. Each invoice has a creation date. Depending on the locale of the user, the date of an invoice needs to be formatted differently. The creation date of an invoice is stored in the model layer and the view displays the formatted date. That is obvious. But who is responsible for formatting the date? The model? Maybe. The view? Remember that the view shouldn’t need to understand what it is presenting to the user. But why should the model be responsible for a task that is related to the user interface? Wait a minute. What about our good old controller? Sure. Dump it in the controller. After thousands of lines of code, you end up with a bunch of overweight controllers, ready to burst and impossible to test.
Isn’t Model-View-Controller the best thing ever?
The controller is tightly coupled to both the model and the view, so it can’t be reused. We also tend to write fat controllers or fat models. What can be done here is to add an extra layer, which is what MVVM is about: the model supplies raw data, and the extra layer (call it a view-model, or a processor) is responsible for formatting that data.
(This is where we can add a processor layer to handle data formatting for our project: the model deals with the data source, the processor layer deals with data formatting, and the controller is kept clean as well.)
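A minimal Python sketch of that extra layer, applied to the invoice-date example above (the `Invoice` and `InvoiceViewModel` names and the date format are illustrative assumptions, not from any particular framework):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    """Model: raw state only, no presentation logic."""
    created: date

class InvoiceViewModel:
    """View-model / processor: turns raw model data into display strings."""
    def __init__(self, invoice, date_format="%d/%m/%Y"):
        self.invoice = invoice
        self.date_format = date_format  # would come from the user's locale

    @property
    def created_text(self):
        return self.invoice.created.strftime(self.date_format)

vm = InvoiceViewModel(Invoice(created=date(2024, 1, 31)))
print(vm.created_text)  # 31/01/2024
```

The view binds to `created_text` and never touches the raw `date`; the controller stays free of formatting code.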
6. Three-Way Handshake
Definition – What does Three-Way Handshake mean?
A three-way handshake is a method used in a TCP/IP network to create a connection between a local host/client and server. It is a three-step method that requires both the client and server to exchange SYN and ACK (acknowledgment) packets before actual data communication begins. A three-way handshake is also known as a TCP handshake.
A three-way handshake is primarily used to create a TCP socket connection. It works as follows:
A client node sends a SYN packet over an IP network to a server on the same or an external network. The objective of this packet is to ask whether the server is open for new connections.
The target server must have open ports that can accept and initiate new connections. When the server receives the SYN packet from the client node, it responds with a confirmation receipt: the SYN/ACK packet.
The client node receives the SYN/ACK from the server and responds with an ACK packet. Upon completion of this process, the connection is created and the client and server can communicate.
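The handshake itself is performed by the operating system's TCP stack; application code only calls connect() and accept(). A minimal Python sketch over the loopback interface (ports and payload are arbitrary) shows that by the time both calls return, the SYN / SYN-ACK / ACK exchange is already complete and data can flow:

```python
import socket
import threading

# Listening socket; the OS picks a free port on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()   # handshake completes server-side here
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# connect() triggers SYN -> SYN/ACK -> ACK under the hood.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
client.close()
t.join()
server.close()
print(data)  # b'hello'
```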
Any HTTP method (like POST) requires a TCP connection, and the only way to initiate a TCP connection is to use the three way handshake.
If you open a connection to a web server and download a webpage using the GET method, and that web server supports connection keep-alive, then subsequent requests to that web server, including POST requests, may simply reuse the already existing TCP connection. In that case, the POST would not require a new 3-way handshake, since the data is transferred over an already-established TCP connection.
Connection keep-alive, however, does not have an infinite duration. If you wait a while after downloading the webpage before sending your POST, the original TCP connection may already have closed; in that case, your browser has to open a new TCP connection to POST your data, which obviously requires starting with the 3-way handshake.
Keep-alives were added to HTTP to basically reduce the significant overhead of rapidly creating and closing socket connections for each new request. The following is a summary of how it works within HTTP 1.0 and 1.1:
HTTP 1.0 – The HTTP 1.0 specification does not really delve into how Keep-Alive should work. Basically, browsers that support Keep-Alive append an additional header to the request:
Connection: Keep-Alive

When the server processes the request and generates a response, it adds the same header to the response:

Connection: Keep-Alive

When this is done, the socket connection is not closed as before, but kept open after sending the response. When the client sends another request, it reuses the same connection. The connection continues to be reused until either the client or the server decides that the conversation is over, and one of them drops the connection.
In HTTP 0.9 and 1.0, by default the server closes its end of a TCP connection after sending a response to a client. The client must close its end of the TCP connection after receiving the response. In HTTP 1.0 (but not in 0.9), a client can explicitly ask the server not to close its end of the connection by including a Connection: keep-alive header in the request. If the server agrees, it includes a Connection: keep-alive header in the response, and does not close its end of the connection. The client may then re-use the same TCP connection to send its next request.
In HTTP 1.1, keep-alive is the default behavior, unless the client explicitly asks the server to close the connection by including a Connection: close header in its request, or the server decides to include a Connection: close header in its response.
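A self-contained Python sketch of HTTP/1.1 keep-alive, using a throwaway local server (the handler and port are illustrative): the client issues two requests over one `http.client.HTTPConnection`, which reuses the same TCP socket because the server leaves it open between responses.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # keep-alive is the default in HTTP/1.1

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for reuse
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first = conn.getresponse()
first.read()                        # drain the body before reusing the socket
conn.request("GET", "/")            # reuses the same TCP connection
second = conn.getresponse()
second.read()
conn.close()
server.shutdown()
print(first.status, second.status)  # 200 200
```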
Can a three-way handshake happen in UDP? No. TCP was made for reliability: reliable sending and receiving of packets. If a packet gets dropped along the way, the receiver can ask for the dropped packet to be resent. With UDP, if a packet gets dropped it is simply lost, and the receiver goes on receiving the next packets. There are no ACKs in UDP, and thus no three-way handshake.
7. Thread Pool
In web applications the thread pool size determines the number of concurrent requests that can be handled at any given time. If a web application gets more requests than the thread pool size allows, the excess requests are either queued or rejected.
Note that concurrent is not the same as parallel. Concurrent requests are the number of requests being processed, even though only a few of them may be running on CPUs at any point in time. Parallel requests are requests being processed that are all running on CPUs at the same point in time.
In non-blocking IO applications such as NodeJS, a single thread (process) can handle multiple requests concurrently. On multi-core machines, parallel requests can be handled by increasing the number of threads or processes.
In blocking IO applications such as Java Spring MVC, a single thread can handle only one request at a time. To handle more than one request concurrently, we have to increase the number of threads.
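A hedged illustration of the blocking-IO case in Python: a pool of four worker threads serving eight simulated requests, so each thread blocks on one request at a time and the jobs complete in roughly two waves (the sleep duration and counts are arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.1)            # simulate blocking I/O (DB call, network, ...)
    return f"response {i}"

# Pool size 4 = at most 4 requests in flight; the other 4 wait in the queue.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results[0], results[-1])  # response 0 response 7
```

With a non-blocking event loop, a single thread could interleave all eight waits instead of dedicating a thread to each.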
8. HTTP vs HTTPS
HTTP is the Hypertext Transfer Protocol used in networking. Whenever you type a website address into the browser, it is this protocol that, by default, listens on port 80 on the server side and lets you see the webpage on your machine.
HTTP is an application layer protocol and uses TCP as the underlying protocol for communication and making connections. As mentioned, it uses port 80 by default.
HTTPS stands for Hypertext Transfer Protocol over Secure Sockets Layer, or HTTP over SSL. Here SSL acts as a sublayer under the regular HTTP application layer. HTTPS encrypts an HTTP message prior to transmission and decrypts it upon arrival. By default, HTTPS uses port 443, whereas HTTP uses port 80. URLs beginning with https indicate that the connection between client and server is encrypted using SSL.
SSL transactions are negotiated by means of a key-based encryption algorithm between the client and the server. This key is usually either 40 or 128 bits in strength; a higher number of bits indicates a more secure transaction.
HTTPS (SSL) connections are necessary if you have an online store, perform financial transactions such as credit card payments or online banking, or ask for any other sensitive information.
Among the advantages of HTTPS are privacy, integrity, and authentication, all of which are missing from an HTTP-based connection.
Can also talk about HSTS here.
9. Common Server Response Codes
Question: Describe server response code 200. Answer: (“OK”) Everything went ok. The entity-body, if any, is a representation of some resource.
Question: Describe server response code 201. Answer: (“Created”) A new resource was created at the client’s request. The Location header should contain a URI for the new resource, and the entity-body should contain a representation of the newly created resource.
Question: Describe server response code 204. Answer: (“No Content”) The server declined to send back any status message or representation.
Question: Describe server response code 301. Answer: (“Moved Permanently”) Client triggered an action on the server that caused the URI of a resource to change.
Question: Describe server response code 400. Answer: (“Bad Request”) A problem occurred on the client side. The entity-body, if any, is an error message.
Question: Describe server response code 401. Answer: (“Unauthorized”) The client failed to provide proper authentication for the requested resource.
Question: Describe server response code 404. Answer: (“Not Found”) Client requested a URI that doesn’t map to any resource.
Question: Describe server response code 409. Answer: (“Conflict”) The client attempted to put the server’s resource into an impossible or inconsistent state.
Question: Describe server response code 500. Answer: (“Internal Server Error”) A problem occurred on the server side. The entity-body, if any, is an error message.
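For quick reference, Python's `http.HTTPStatus` enum carries the standard reason phrases for the codes above:

```python
from http import HTTPStatus

# Print the standard reason phrase for each code discussed above.
for code in (200, 201, 204, 301, 400, 401, 404, 409, 500):
    print(code, HTTPStatus(code).phrase)
# 200 OK, 201 Created, 204 No Content, 301 Moved Permanently,
# 400 Bad Request, 401 Unauthorized, 404 Not Found,
# 409 Conflict, 500 Internal Server Error
```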
10. Advantages and Disadvantages of CDN
Cached resources – There is a good chance that resources your website uses are already cached in the user’s browser, because another site served them from the same CDN.
Multiple requests – Browsers limit the number of simultaneous requests per domain (historically around 4); further requests are queued until a slot frees up. A CDN gives you an extra edge here by serving content from multiple domains, so your website can make requests for resources to different domains at the same time. This reduces queue time and loads content faster.
Distinct geo-located data centers – When you host your static assets on a CDN, the provider replicates that content to geographically distributed data centers for faster delivery. For instance, if your website is hosted in Australia and a user visits from the USA, the response would normally pass through many network hops to reach the user. A CDN instead serves the request from the nearest data center, reducing the number of hops and speeding up the response.
Optimized infrastructure – A good CDN provider offers high uptime/availability, built-in failover, and lower packet loss. Ordinary web hosting infrastructure may be good, but it will rarely match the scalability, capacity, and failover support of a CDN.
Failure – This may be a rare case, but if the CDN’s data center or services go down, you can do nothing but wait until they come back up. Alternatively, you can write failover logic that falls back to resources hosted on your own server, at the cost of additional development.
Security – Security is a concern if you use public CDN services. In particular, remotely hosted JavaScript could be altered to collect user and system data.
Restriction – Some countries may ban the IP addresses of the CDN provider. The risk is that your website may not behave or look as it should on those users’ machines. You can mitigate this with failover logic that falls back to local resources, but that forfeits the CDN’s advantages.
Content optimization – Using a CDN does not give you content optimization. CDN-hosted libraries are fine for general development, but the entire library is loaded even if you only need a specific subset of its features. In that case you need to build a trimmed-down version of the resource and host it on your own server.
11. Difference between a proxy server and a reverse proxy server
A forward proxy is a proxy configured to handle requests from a group of clients under the local administrator’s control to an unknown or arbitrary group of resources outside that control. Usually the word “forward” is dropped and it is referred to simply as a proxy; this is the case in Microsoft’s topology. A good example is a web proxy appliance that accepts web traffic requests from client machines on the local network and proxies them to servers on the internet. The purpose of a forward proxy is to manage traffic to the client systems.
A reverse proxy is a proxy configured to handle requests from a group of remote or arbitrary clients to a group of known resources under the control of the local administrator. An example is a load balancer (a.k.a. application delivery controller) that provides high availability and optimization for workloads such as Microsoft Skype, Exchange, and SharePoint. The purpose of a reverse proxy is to manage the server systems.
Squid is a forward proxy and Varnish is a reverse proxy.