iOS

Interview Question Practice Series: Pt. 2

Continuing right where we left off:

Question 6

(FB front-end engineering question) Using HTML & CSS, create an image that displays another image when the user hovers over it. Align the second image to the bottom right of the screen.
You can see the code by clicking here.
  1. This one is very straightforward if you've used a bit of CSS here and there. You need two HTML img elements and need to assign classes to them.
  2. The rest of this just tests your CSS knowledge, mainly the "display" property. You can set the display of your first image to "block" - and optionally set other properties to make it look nice, such as a relative position.
  3. After that, you set the bottom and right properties of your second image to "0" so that it aligns to the bottom right of the screen. Its display should be set to "none" so that it stays hidden until a user interaction reveals it.
  4. Lastly, we use the ":hover" pseudo-class (which simply defines a special state of an element). We target the hover state of the first image and use the "+" adjacent sibling combinator to select the second image and switch its display to "block", as in the sketch below.
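A minimal sketch of that idea (the class names and image paths are my own; the code linked above may differ in the details):

<!-- the two images must be adjacent siblings for "+" to work -->
<img class="first-image" src="first.jpg">
<img class="second-image" src="second.jpg">

.first-image {
    display: block;
    position: relative;
}

.second-image {
    display: none;      /* hidden until the hover kicks in */
    position: fixed;    /* pin to the viewport... */
    bottom: 0;          /* ...at the bottom right of the screen */
    right: 0;
}

.first-image:hover + .second-image {
    display: block;     /* "+" is the adjacent sibling combinator */
}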

Question 7

(FB iOS engineering question) This one is the most challenging of what we've covered so far, in my opinion (mainly because it involves domain framework knowledge rather than general programming knowledge). iOS developers have the option to dispatch a block that executes after a delay (dispatch_after(when, queue, block)). It isn't easy to cancel that block once it's been scheduled - implement a cancellable_dispatch_after method to allow for easy cancellation of the block.
  1. The first thing to note is what parameters a dispatch_after call takes. The "when" is a dispatch_time_t, the queue is just the queue on which to submit the block, and the block is just that!
  2. If we're delaying a UI action based on a user interaction (pressing a button, perhaps), the queue we're executing on is obviously the main queue. Make sure you're aware of that in case you're ever asked in an interview.
  3. Let's think of a use case for cancelling a block that executes after a delay. Say you have an animation that needs to be cancelled if the user touches the screen while the delayed block is still waiting to fire. We could keep a boolean class property that tracks whether the screen has been tapped (initialize it to false, then set it to true in a gesture-recognizer callback). Inside our cancellable block, if the boolean is true, the user has tapped the screen and we skip the animation block (a sketch of this follows below).
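Here's a minimal sketch of that flag-based idea, written against the Swift 2-era GCD calls this question assumes (the class and method names are my own):

import Foundation

class AnimationController {
    var cancelled = false   // flip this in a gesture-recognizer callback to cancel

    func cancellableDispatchAfter(delay: Double, block: () -> Void) {
        let when = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)))
        dispatch_after(when, dispatch_get_main_queue()) { [weak self] in
            guard let controller = self where !controller.cancelled else { return }
            block()   // only runs if nothing cancelled it during the delay
        }
    }

    func handleTap() {   // e.g. wired up to a UITapGestureRecognizer
        cancelled = true
    }
}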

Architecture - MVC and MVVM

MVC - the typical architectural pattern for many web languages, Objective-C and Swift included. Separation of concerns exists nicely in this pattern - the most immediate problem we have is that Cocoa Touch has bundled our V and C together into a ViewController. Thus, we have view logic and "glue" logic all in one place. Any idea what this may lead to...? That's right! We end up creating a different type of MVC - a massive view controller!

Our views and controllers end up very tightly coupled, and a ton of logic is thrown into our VCs because we can't think of any other good place to put it. MVVM - Model, View, ViewModel - attempts to alleviate this by having a ViewModel act as the new "controller". This lets the ViewController class created by Apple be JUST a view - no controller logic there. The controller role is then truly factored out into the ViewModel, which acts as a link between the view (represented by a ViewController subclass) and our model. This architectural pattern relies on data binding between the view and the ViewModel - that is, when a change is triggered in the ViewModel (based on a change in the model), the changes should cascade down to the view through data binding - KVO, anyone?
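To make the data-binding point concrete, here's a minimal, hypothetical sketch that uses a plain closure as the binding (KVO or a reactive framework would serve the same role):

import UIKit

struct User {
    let name: String
}

class GreetingViewModel {
    // the "binding" - the view layer hands us a closure to call whenever the data changes
    var greetingDidChange: ((String) -> Void)?

    var greeting = "" {
        didSet { greetingDidChange?(greeting) }
    }

    func update(withUser user: User) {
        greeting = "Hello, \(user.name)"   // a model change cascades down to the view
    }
}

class GreetingViewController: UIViewController {
    let greetingLabel = UILabel()   // adding it to the view hierarchy is omitted here
    let viewModel = GreetingViewModel()

    override func viewDidLoad() {
        super.viewDidLoad()
        viewModel.greetingDidChange = { [weak self] text in
            self?.greetingLabel.text = text   // the "view" just renders - no glue logic
        }
    }
}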

This isn't really all that different from true MVC - and that's a problem, as far as I'm concerned. All we've done is create a new repository to dump things we don't know where else to put - instead of putting this logic in a VC alongside view code, we lump it into the ViewModel. Yes, our ViewController classes have less code now and are more focused and testable, but instead of having bloated ViewControllers, we have bloated ViewModels. THIS IS NOT A COMPELLING REASON TO CHANGE ARCHITECTURE PATTERNS - the two good things I can say about MVVM are that 1. you own your ViewModel, whereas you were previously putting a lot of logic into a ViewController (which is created by Apple, not you), and 2. ViewControllers are more testable with this approach.

What, then, can we do to truly improve our architecture? For one, create as many classes as you need to make sure that each file has a single (or minimal) responsibility. Don't have a class called "BicycleVC" - instead, break that down into "BicyclePartsVC", "BicycleMaintenanceVC", etc. You don't even have to break it down into ViewControllers! Your files can just be regular objects that are used within another file, a controller somewhere else. By breaking down your files and naming them appropriately, all the while obsessively making sure that no class takes on too much responsibility, you greatly improve your architecture. No single class, not even a controller, will be saddled with too much responsibility - or too much code.

ANY OTHER DEVELOPER SHOULD BE ABLE TO LOOK AT YOUR CLASS NAMES AND, WITHOUT LOOKING AT THE SOURCE CODE, DETERMINE WHAT EACH CLASS DOES AND HOW THE "PIECES FIT TOGETHER". 

Lots of data source code in a TableViewController subclass you're using? Factor it out. Lots of networking code spanning multiple services lobbed into your VC? Factor it out into several smaller classes, and, if it suits you, use a facade class to allow access to those (ideally private) internal APIs. When you think about it, a ViewModel is essentially a facade class used to access the model that powers your software. Focus less on popular patterns and more on modularizing your classes, forcing minimal responsibility onto each class, and compulsively giving your files descriptive names.
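As a rough sketch of that facade idea (all of the names here are invented for illustration):

class ProfileService {
    func fetchProfile(userID: String) { /* one focused networking concern */ }
}

class PhotoService {
    func fetchPhotos(userID: String) { /* another focused concern */ }
}

class UserNetworkFacade {
    private let profileService = ProfileService()
    private let photoService = PhotoService()

    // callers only ever see this one entry point
    func loadUser(userID: String) {
        profileService.fetchProfile(userID)
        photoService.fetchPhotos(userID)
    }
}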

Data Structures in Swift!

I recently added a README file to my GitHub repo to add a bit of clarity to anybody checking out the code. It includes sort and search algorithms and some basic data structures. I've covered linked lists previously on this blog, so the next few posts will be on the various algorithms I've implemented in Swift.

To finish up the series, I'll be doing a (possibly multi-part) post on Binary Search Trees - they're easily my favorite way to review recursion (or learn about it for the first time, if you're new to programming). I've written a BST in two ways - with a "node" data structure that is used to build the tree, and without a node structure. The latter version uses the tree type itself as the structure, meaning that each node is treated as an independent subtree. The logic is extremely different between the two, and although I would personally recommend writing a BST without a node structure, the node-based version is still good practice to review.
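For a taste of the node-less approach, here's a small sketch where the tree type itself plays the role of each subtree (my repo's version differs in the details):

class BinarySearchTree<T: Comparable> {
    var value: T
    var left: BinarySearchTree?
    var right: BinarySearchTree?

    init(value: T) {
        self.value = value
    }

    // every instance is the root of its own subtree, so insertion just recurses
    func insert(newValue: T) {
        if newValue < value {
            if let left = left { left.insert(newValue) } else { left = BinarySearchTree(value: newValue) }
        } else if newValue > value {
            if let right = right { right.insert(newValue) } else { right = BinarySearchTree(value: newValue) }
        }
        // equal values are ignored
    }

    func contains(target: T) -> Bool {
        if target < value { return left?.contains(target) ?? false }
        if target > value { return right?.contains(target) ?? false }
        return true
    }
}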

Swift Algorithms & Data Structures

The various files make use of Swifty features, all of which are noted in the README on the repo. These include preconditions, subscripting, extensions, SequenceType conformance, and computed properties. Enjoy!

Swift Generics

Generic programming refers to the ability to write functions and data types that don’t specify a type. In this way, an algorithm or, say, a collection data type can have high reusability, with a concrete type specified when it is instantiated. For example, an array can hold many kinds of objects - it’s a generic data structure. When you want an array that specifically holds instances of a struct called Gen, you can declare it in Swift as:

var genericArray = [Gen]()

You could also declare an empty array of integers as:

var intArray = [Int]() or, using explicit generic syntax, as var intArray = Array<Int>()

In this way, it’s clear that there is one array type, but we can specify what types the array can hold. You can almost treat a data type as a function, with a type being passed as an argument.
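As a toy illustration of "passing a type as an argument" (this isn't from the repo):

struct Stack<Element> {
    private var items = [Element]()

    mutating func push(item: Element) {
        items.append(item)
    }

    mutating func pop() -> Element? {
        return items.isEmpty ? nil : items.removeLast()
    }
}

var intStack = Stack<Int>()        // Int is "passed in" when the stack is instantiated
intStack.push(42)

var stringStack = Stack<String>()  // same structure, different type
stringStack.push("generics!")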

Arrays, Dictionaries, and Optionals are all generic types in Swift and, as such, a given instance can only hold one type - they are type-safe data structures. There are workarounds for this - say you want an array that holds instances of both Struct A and Struct B. You can declare the array to hold type <Any>, which will allow you to insert objects of different types into the same array.

*Note: when using <Any>, be aware that the compiler won’t check whether a method exists when it is called on an object of type Any. A safe way to access an object of type Any is to conditionally type cast it (with as?), which hands you back an optional.

You can also have different classes conform to the same protocol and declare a generic array that takes <protocol> objects. However, this can get tricky - imagine you have two separate structs that both conform to the same protocol. Struct A has a function named “testing” that takes an int and returns an int, while Struct B has a function of the same name that takes a string and returns a string. Say you randomly grab an object from your array, call the “testing” function on that object, and pass it an integer. If the object you grabbed is actually a Struct B, you’d have a problem.
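A quick, hypothetical illustration of that pitfall and the conditional cast that keeps you safe:

protocol Testable {}

struct A: Testable {
    func testing(input: Int) -> Int { return input * 2 }
}

struct B: Testable {
    func testing(input: String) -> String { return input + "!" }
}

let items: [Testable] = [A(), B()]

// conditionally cast before assuming which "testing" you're dealing with
if let a = items[1] as? A {
    print(a.testing(5))
} else {
    print("not an A - don't pass it an integer")
}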

If you wanted to write a function that took a generic collection and returned the first element, regardless of the type, you could write something like this:

func firstFromCollection<T: CollectionType>(a: T) -> T.Generator.Element? {
    guard !a.isEmpty else {
        return nil
    }
    return a.first
}

We can call it for arrays and strings like so:

 

print(firstFromCollection([1,2,3,4]))

print(firstFromCollection("abc".characters))

 

Swift is a protocol-oriented language, and GeneratorType is a protocol used with collections; it is used for iterating over collection types. Typical "for in" loops actually use generators under the hood in Swift. Take this time to read more about generators, lazy mapping functions, guards, and collection types.

Check out my GitHub for Swift code (heavily commented, of course) implementing a generic linked list. When you create an instance of the data structure, you specify the element type! Generic programming FTW! Link to code here

Pay attention to one particular function in the generic linked list:

func replaceItemAtIndex(position: Int, withValue value: T)

Notice the second argument has "withValue value: T" - "withValue" is what's referred to as an external parameter name. This means that a caller of the function sees "withValue" as the argument label, which is slightly more explicit and readable than just "value". Internally, we use "value" in the function's definition. Think about which you'd prefer to see -

replaceItemAtIndex(position: Int, value: T) or replaceItemAtIndex(position: Int, withValue: T)

The latter simply reads more like English; this is just a personal preference, but coming from Objective-C, I'm a fan of this. 
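At the call site, assuming a linked list of strings named list, that reads as:

    list.replaceItemAtIndex(0, withValue: "New Head")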

NOTE - The code for my linked list includes a CustomStringConvertible extension, which allows me to print the contents of my list with the "print" function, as in print(linkedList). Before doing this, I had to write a custom print function. Additionally, I've made use of subscripts (including setters and getters). This allows for array- and dictionary-like access to elements, either to get an element or set it - this looks like linkedList[0] to get the head node or linkedList[0] = "New" to replace the head node. Internally, the subscript just calls my getItemAtIndex function and my replaceItemAtIndex function. This exercise was extremely helpful in getting used to general Swiftiness, utilizing some of the important changes to the language when moving over from Objective-C.

Writing pure Swift code allows me to focus on new syntax and architecture rather than the Cocoa Touch APIs. Learning how to use a framework doesn't mean you know the best practices of the language itself - it's important to develop good habits early.

 

EXTERNAL RESOURCE:

This link is for a group of slides on advanced generics, from a meetup presentation I attended at Google HQ!

Categories & Extensions

Categories provide a useful way to split a single class into multiple files. If we have, let’s say, a House class, it may have some methods like “buildWithMaterials” and “returnAddress” which returns the home address.

There may be methods that involve home maintenance such as “needsPlumbingRepair” or “paintWalls”. Instead of throwing these in the general class definition of House, we can include them in a category called “House+Maintenance”. This is a separate file that imports the House class header. At runtime, the methods included in this category will become part of the House class.

Say we create a subclass of House, and that subclassed object will ultimately require maintenance; we can then import the header file of our category so that we have access to the API. This declutters the original House class interface and implementation files, and reduces the number of methods in one place.

Another way we can utilize categories is to use them to hide certain methods. While we can’t truly make a method protected (due to Objective-C’s dynamic runtime), we can simulate one using a category. If we use our House example, let’s say we want a protected method called “marketValue” - we can create a new category for House called House+Protected. In the interface file, we list our marketValue method so that it is publicly available to classes that import this category. In our implementation file, we write the code for our marketValue method, returning some NSNumber.
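Sketched out, that protected category might look roughly like this (file contents abridged; marketValue's body is just a placeholder):

// House+Protected.h
#import "House.h"

@interface House (Protected)
- (NSNumber *)marketValue;
@end

// House+Protected.m
#import "House+Protected.h"

@implementation House (Protected)
- (NSNumber *)marketValue {
    return @(750000);   // placeholder value
}
@end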

In some other class where we create an instance of House, we can send messages to that House object and call any method included in the House class interface file. Remember, though, that this other class should NOT import the House+Protected interface file; that is the whole point of protecting those methods. Without that import, we can't call the marketValue method; if we try to send the marketValue message to our House object, the compiler will complain.

In order to access the marketValue method, we expose access to it through a subclass of House. If we create a subclass of House and call it Condo, we inherit all of the properties and methods that House contains, and we can add additional methods here. To keep marketValue hidden, we import House+Protected.h only in the implementation file - there we can create a method (let's call it assessValue), and in defining assessValue, we call our protected method (marketValue). We include assessValue in our Condo interface file so that it is publicly available - behind the scenes, that method calls our protected method.

If we import Condo into our main.m file, for example, create an instance with Condo *london = [[Condo alloc] init], and try to call our protected method by writing [london marketValue], we will get a compiler error. Instead, we should call the Condo method that itself calls our protected method by writing [london assessValue].

One way to completely bypass all of this work is to simply use performSelector: - since all methods are effectively public at runtime in Objective-C, we can only simulate making them private or protected; we can never make them truly so.

We could just say:
SEL protectedMethod = @selector(marketValue);
if ([london respondsToSelector:protectedMethod]) {
    [london performSelector:protectedMethod];
}

Because the House+Protected category adds marketValue to House at runtime, our london object DOES actually respond to the marketValue selector. As such, the above code compiles and the protected method runs.

Along with categories, we also have extensions. They serve a similar purpose to categories in that they allow you to add methods to a class outside of that class’s main files, the difference being that you don’t use a separate interface and implementation file to define those methods. Instead, you must implement the extension’s API in the main class’s .m file (not in the interface, or .h, file).

Much like categories, we can use extensions to declare private methods. In the implementation file of House, there is a section that reads @interface House (), and below it the main implementation block, which reads @implementation House.

We declare the extension's methods in the same place we would declare an instance variable - between @interface House () and its @end. The declaration there is just -(NSInteger)marketValue;. The definition then lives in the @implementation block below, and could look like:
-(NSInteger) marketValue {
    return [self currentListedPrice];
}

Since the method isn’t declared in the header, it’s not actually visible to other classes. We would need to call it from another method that is publicly visible (declared in the .h file).
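Concretely, inside House.m that looks something like this (currentListedPrice is assumed to be defined elsewhere in House):

// House.m

@interface House ()              // class extension: declarations stay private
- (NSInteger)marketValue;
@end

@implementation House

- (NSInteger)marketValue {
    return [self currentListedPrice];
}

// ...methods declared publicly in House.h go here too...

@end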

iOS Questions: Pt. 1

In this (long) series of posts, I'll be posting groups of questions on iOS concepts as they come to me. I have no direction in the questions I'm choosing to answer; I'm simply going to go through my notes, find random questions, and answer them. If questions come up in this series that deserve dedicated posts, I'll link to another blog post that discusses the topic more fully (think something like multithreading with GCD).

  1. Deep copy v. Shallow copy v. Assignment
    A shallow copy simply copies references to a place in memory. If object B is a shallow copy of object A, it simply copies object A's references (pointers) to objects in memory. Object B then refers to that same place in memory. Making changes to the data contained in one of the objects will impact the very same data being used by the other.

    In contrast, a deep copy makes copies of the actual data, storing it elsewhere in memory. Changing the data contained in one object will have no impact on the other.

    Assigning one object to another just copies the pointer. That is, if you have dictionary *A and write dictionary *B = A, then B points to the very same dictionary object in memory that A does. A shallow copy behaves the same way in practice, but with a shallow copy B would actually be a new object containing each pointer that A contains, still pointing to the same objects in memory. In either case, changing the underlying values through one will be visible through the other (a short snippet after this list makes the distinction concrete).
     
  2. Heaps and Stacks in iOS - how are they used?
    Almost all objects in iOS development are stored on the heap - they can be accessed globally and must be manually released and deallocated (pre-ARC). Stack objects disappear when a function returns and don't need to be manually managed. Imagine if stack storage were widely used for objects in iOS - what would happen if you needed to keep one around after the function returned? You could retain it, but once the function ended you'd be holding a pointer to a non-existent object - a dangling pointer. As a result, almost all objects are placed on the heap - the exception is blocks.

    Blocks are actually stored on the stack - that's why variables created within blocks aren't accessible outside of the block. They no longer exist when the block (which is treated like a function) ends; to continue to use them outside of the scope of the block, you must use a __block variable before entering the block. You can also create a copy of the block variable you wish to use, which actually places that copy on the heap.
     
  3. What are block variables used for?
    Using block variables (prefaced with a double underscore at declaration) is a way to access information created within a block outside of the block itself (remember that objects created within a block are stored on the stack, and as such disappear when the block exits). You could technically create an array within a block, have an array property, and set that property equal to the block's array. However, the array is then accessible from elsewhere in the program because it's exposed as a property - for the sake of data encapsulation, block variables are a better choice.
     
  4. What is a message in Objective-C? A Method? A selector?
    Messages and Methods are conceptually different in Objective-C but not so different practically.

    The Objective-C runtime keeps a list of the methods and functions it is aware of. When compiled, method names are turned into selectors (representations of the method name in memory). When a message is sent, a function called objc_msgSend does a dynamic lookup - it goes to that list of functions and finds the one we need (which is represented by a selector).

    This dynamic lookup allows us to do some interesting stuff - we can change the list at will. We can move values around so that selectors point to different code than what was originally written. Additionally, the runtime asks an object whether it recognizes a selector before it actually sends the message. An object can then choose whether it even wants to respond to that message.

    In short, when calling a method, there is little ambiguity - we know who is performing the action. The object did something.
    However, when we send a message, that ambiguity does exist. Something was done by the object. The emphasis is no longer on the object, but rather that “something” was done - when we send a message, we can’t be entirely sure what will respond, since dynamic lookup allows us to change certain things about our list of functions. The focus is more on the message itself.
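To make the copy distinctions from question 1 concrete, here's a small Objective-C illustration (Foundation only; copyItems:YES performs the deep copy by sending copy to each element):

NSMutableString *greeting = [NSMutableString stringWithString:@"hi"];
NSMutableArray *original = [NSMutableArray arrayWithObject:greeting];

NSArray *assigned = original;                        // assignment: same array object
NSArray *shallow = [original copy];                  // new array, same element pointers
NSArray *deep = [[NSArray alloc] initWithArray:original copyItems:YES];  // elements copied too

[greeting appendString:@"!"];
// assigned[0] and shallow[0] both now read "hi!" - deep[0] still reads "hi"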

Data Structures - Hash Tables

Associative arrays, structures which have meaningful keys associated with values instead of indexes being associated with values, are another abstract data type. The first rule to remember is that keys must be unique, but values can be duplicates. Custom objects can be stored as values, and implementations of associative arrays differ across languages; in Objective-C, we have the NSDictionary class. Underneath the hood of every associative array is a data structure known as a hash table.

A hash table is essentially a pair of arrays, one that stores a key and the other that stores an associated value. Entries in a hash table are called buckets, and hash tables begin with multiple buckets that are waiting to be filled with data. What determines which bucket will be used to store a value from our key value pair? The answer to that lies in understanding a core part of the hash table: the hash function.

This diagram illustrates the basic idea behind a hash function: it takes in raw data and, through some deterministic operation, outputs an index.

Imagine we have some raw data, perhaps a custom object, that we are going to store in a hash table. How does a hash function determine where to store that data? Hash functions can be written in many different ways, and some are much better than others. In its most generic sense, a hash function takes some data input (let's focus on the key in our key-value pair here), and performs some operations on it to generate a large integer. This is a way of purposefully representing that data in a shortened way; this is known as hashing, and the integer output is a hash or a hash value. There are two rules to remember when discussing hash functions: 1. Hashing is an irreversible process (it is not invertible) and 2. some information is lost when hashing.

A hash function should be able to generate some data from some input using the same set of rules every single time; we can say that hash functions must be deterministic. While a hash function should produce the same hash for two equal objects (allocated separately but holding duplicate data), two equal hashes do not necessarily come from the same data. When two different pieces of data result in the same hash value, we have what is known as a collision; we will discuss how to manage collisions shortly.

A hash function takes a key from a key-value pair and ultimately maps it to an integer within the range [0, N-1], where N is the capacity of the bucket array of our hash table. The goal is to use h(k), which represents the hash value of a key (where h is our hash function and k is the key), as an index of our bucket array. Depending on the hash function, our hash value for a key may be a very large integer; we must find a way to reduce that number down to a size relative to the size of the hash table. An example of a way to do this is by taking the modulo of the integer with regard to the number of buckets; this final output would represent the hash table bucket array index. We then map to that index of the array and store the value in our key-value pair.
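In Swift, that reduction step can be as small as this (a toy example, not a production hash function):

let bucketCount = 16
let key = "apple"
let hash = key.hashValue                    // h(k): some large integer
let bucketIndex = abs(hash) % bucketCount   // mapped into [0, N-1]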

The implication of all of this is that if we need to find a value and we have its key, we can just pass that key to our hash function, get our final hash value output, and index directly into the array using that value. Indexing into an array, remember, is a random access operation (and a constant time one). So while a hash table is similar to an array in that we have random access, it is unlike both arrays and linked lists in that there is no need for a linear search, a binary search, or a traversal to find an item.

Collision Management

Hash functions sometimes result in collisions - this is when a generated hash value isn't unique between two different keys, resulting in our function attempting to map two different entries to the same bucket. One way to deal with this is to have any bucket that receives multiple entries hold an array or a linked list. This way, when you index into that bucket, you can either iterate through an array storing multiple objects, or traverse a linked list to do the same. This solution to collisions is known as separate chaining.
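Here's a compact sketch of separate chaining where each bucket is just an array of key-value pairs (written against the Swift 2-era standard library the rest of this blog uses):

struct ChainedHashTable<Key: Hashable, Value> {
    // each bucket holds every pair whose key hashed to that index
    private var buckets: [[(key: Key, value: Value)]]

    init(capacity: Int) {
        buckets = Array(count: capacity, repeatedValue: [])
    }

    private func bucketIndex(key: Key) -> Int {
        return abs(key.hashValue) % buckets.count
    }

    mutating func setValue(value: Value, forKey key: Key) {
        let index = bucketIndex(key)
        for (i, pair) in buckets[index].enumerate() where pair.key == key {
            buckets[index][i].value = value              // key already present - overwrite
            return
        }
        buckets[index].append((key: key, value: value))  // collision? just chain it
    }

    func valueForKey(key: Key) -> Value? {
        let index = bucketIndex(key)
        for pair in buckets[index] where pair.key == key {
            return pair.value
        }
        return nil
    }
}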

Next time we'll be covering trees and graphs (briefly). After that, it's all about Cocoa Touch.

Data Structures - Linked Lists

Before we dive into a new data structure, be aware that there is a great quick guide on Computer Science concepts (data structures and common algorithms) that you can access here.

Now, let's discuss another data structure that builds off of our knowledge of arrays: the Linked List.

The linked list is made up of units called nodes, and each node contains two things: a value, and a pointer to the next node so that we can find it in memory. Unlike arrays, linked lists are collections of data that are not stored contiguously. Because of this, linked lists can grow in size very easily; you can simply add a node to a linked list anywhere in memory, and the previous node will simply contain a pointer to the new node, wherever it may be.

Linked lists, as data structures, are often used to implement abstract data types, which describe functionality more than implementation. The underlying implementation can be built on data structures such as arrays, hash tables, linked lists, etc. For example, stacks, queues, and associative arrays can be implemented using linked lists; they themselves are abstract data types, but are implemented using data structures.

Another way to differentiate between data structures and abstract data types is that data structures are looked at from the perspective of the person implementing them; an abstract data type is viewed from the perspective of a user, meaning that the data is described by its behavior.

Just as a developer can have variations of a standard array (think dimensions), we can have doubly linked lists, where nodes contain pointers to both the next node and the previous node, and even circular linked lists, where instead of the last node pointing to a "dummy" node, as is sometimes done, the last node contains a pointer to the first one. Linked lists support sequential access; that is, you must traverse from the "head" node through the list until you find your desired object. This means that indexing in a linked list is a linear time function; the Big O notation for linked list access (and also searching) is O(n).

This is a diagram illustrating the insertion of a node into an existing linked list.

However, insertion and deletion are optimized operations in linked lists - they are both constant time functions (O(1)). If something needs to be inserted, we simply redirect the pointers in the linked list so that the intended previous one points to the new node, and the new node points to the intended next one. The same goes for deletion; we can simply redirect the node pointers so that nothing points to a certain node, and deallocate it from memory.

*note - if you don't already have a pointer to a node that you want to delete, the actual time for deletion would be (search_time) + O(1). The O(1) represents the constant time deletion. The same goes for insertion.
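A bare-bones illustration of that pointer redirection (a hypothetical Node class, not the one from my repo):

class Node<T> {
    var value: T
    var next: Node?

    init(value: T) {
        self.value = value
    }
}

// constant-time insertion once you already hold a reference to the previous node
func insert<T>(newNode: Node<T>, after previous: Node<T>) {
    newNode.next = previous.next   // new node points at the old "next"
    previous.next = newNode        // previous node now points at the new node
}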

Arrays, at least in my experience, are used far more often than linked lists. There isn't even a built-in linked list class in Objective-C, although you can implement your own in one of a few ways; check Stack Overflow for more information on replicating linked list functionality in Objective-C.

One time a linked list may be useful is when using it to implement, for example, a queue. As we will learn, queues are FIFO (first in, first out), which means that the first object to be placed on a queue is the first one to exit the queue. Because insertion and deletion are constant time operations for linked lists, inserting into and deleting from the middle of a linked list is cheap. For a priority queue, in which you may break the FIFO rule when something has a high priority flag placed on it, linked lists work nicely since you can remove directly from the middle of one.

Next time we'll be covering very simple abstract data types - stacks and queues!

Data Structures - Arrays

In the context of data structures, I'll be writing about them in a language-agnostic fashion. However, I WILL be sure to address how they are used in Cocoa Touch programming (the NS classes that the structures correspond to, differences between Objective-C and other languages, etc.). Over the next few blog posts, I'll be going over Arrays, Linked Lists, Stacks, Queues, and Hash Tables. We will be discussing some of the main uses of data structures, which ultimately determine which is best for your situation: indexing, searching, inserting, deleting, and sorting.

Side note: I finally took the time to find the origins of shell, terminal, and console. First of all, most Macs use bash as their shell. The shell is just the program that processes commands and returns output. The terminal is a wrapper program that contains and runs the shell program. A console is a special kind of terminal used to communicate with your operating system at a low level (not standard on Mac machines).
Today, we use terminal emulators (the Terminal program on a Mac). These are virtual terminals that replace the terminals of years past, which were little more than a keyboard and a monitor used to type in commands, pass them to the shell, have the computer process them, and return an output. Today we more commonly use graphical user interfaces to interact with our computers.

Side note aside, let's begin by discussing a very commonly used data structure, the array.

Arrays are ordered collections of data stored in memory contiguously, which means that a whole block of memory is allocated for an array. For this reason, arrays cannot grow in size easily, since there may not be additional memory RIGHT NEXT to the block already set aside for the array. This leads us to our first point, which is that classic arrays are fixed in size (for the first Objective-C specific point on this, see #1 at the bottom of this post).

Arrays can be one-dimensional, two-dimensional (an array of arrays), three-dimensional (an array of two-dimensional arrays), etc. It just depends on how you want to structure your data. They can even be jagged (a two-dimensional array where each array is not necessarily the same length).

Arrays allow for random, or direct, access. This means that we can index right into an array. Our computer already knows where an array lives and how big each element is, so if we say "take array A and go to the 4th index", it can jump right to the 4th index. This means that indexing into an array is a constant time function: additional data in the array doesn't increase the time complexity of a lookup. The Big O notation for array indexing is O(1).

Searching, on the other hand, is a more expensive process, as the computer must iterate over items in the array until the desired object is found. It doesn't know what is being stored in the array, only WHERE in MEMORY it is stored. The best case scenario is that the object we are looking for is at index 0, and the worst case is that it is at the last index. Either way, we plan for the worst case, which means potentially iterating over the entire array until we find our object. Searching in an array is a linear time function, meaning that additional inputs result in a corresponding increase in time complexity. The Big O notation for array searching is O(n).

We can optimize searching by using a binary search - we sort an array in ascending or descending order, check the middle element, and if that value is higher than our desired value, we search the lower half of the data values. This process continues, halving the data set each time, until we find (or don't find) the desired value.
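A quick sketch of that process over an array sorted in ascending order:

func binarySearch<T: Comparable>(haystack: [T], target: T) -> Int? {
    var low = 0
    var high = haystack.count - 1

    while low <= high {
        let mid = (low + high) / 2
        if haystack[mid] == target {
            return mid                 // found it
        } else if haystack[mid] < target {
            low = mid + 1              // desired value is in the upper half
        } else {
            high = mid - 1             // desired value is in the lower half
        }
    }
    return nil                         // not in the array
}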

1. With regard to the Objective-C implementation of arrays, although it is more memory intensive to do so, the language offers the NSMutableArray class (along with NSArray and CFArray). This is an array that DOES have dynamic sizing; however, things change when you're using this kind of data structure. If you are adding objects to a mutable array, there MAY be space in the contiguous memory block for the additional data; if you're inserting anywhere but the end, though, the other objects must be shifted around to keep the indices in order. Additionally, if there is no space in that contiguous block, the entire array must be copied over, with the new values, to a new contiguous block somewhere else in memory.
As a result, if data is subject to frequent change or growth in size, there may be other data structures better suited to a developer's needs.

Arrays are good to use when access is prioritized (a constant time function). If you are able to insert data into an array (mutable) and you are inserting at the end, THEN insertion is a constant time function (we can jump directly to the last index of the array in memory). However, if you're not inserting at the end, the time complexity is linear (O(n)), since the elements after the insertion point must all be shifted over.
 

AN ADDITIONAL POINT ON RANDOM/DIRECT ACCESS (INDEXING) OF AN ARRAY BEING CONSTANT-TIME:

The math involved in indexing an array is addition and multiplication - these are both considered to be constant time operations. In other words, adding 1000 + 1000 + 1000 doesn't take much more time than adding 15 + 15 (and the same goes for multiplication). Higher values, and more of them, don't correspond to a linear increase in the time required for the computation, so these operations are considered constant.

A crude representation of an array in a contiguous memory block, with each collection member mapped to its place in memory

Let's say we want our program to access the third element in the array (index 2). We know that the first element has a memory address of x004. Each element is 2 bytes large in this case, so addresses increase by 2 here. To get the index we want, we must compute:
(address of first element) + (desired index × 2 bytes per element)
This comes out to (x004) + (2 × 2) -> (x004 + 4 bytes).
This leads us to the address x008, which holds the value 99 (the value at the third element, index 2).
As you can see, only constant-time multiplication and addition are required to index into an array; this is why array indexing is a constant-time function.

*If you see me using the term deterministic in future posts, take it to mean that, given the same input, an algorithm will return the same output every time.