# Big O Notation Explained
## Introduction

Big O notation describes how the running time of your code grows as the size of its input grows. It doesn't measure wall-clock time; it measures how the number of operations scales with the input.

## O(1): Constant Time

With O(1), the size of the input doesn't matter: the code always takes the same amount of time to run.

```csharp
var users = new Dictionary<int, string>
{
    { 1, "Caique" },
    { 2, "Maria" },
    { 3, "Fernanda" },
};

var user = users[1];
```

In this example, the size of the collection doesn't matter: accessing a dictionary by key is a constant-time operation.

## O(log n): Logarithmic Time

An O(log n) algorithm divides the problem in half at each step. The most common example is binary search.

```csharp
int BinarySearch(int[] array, int target)
{
    int left = 0;
    int right = array.Length - 1;

    while (left <= right)
    {
        int middle = left + (right - left) / 2;

        if (array[middle] == target)
            return middle;

        if (array[middle] < target)
            left = middle + 1;   // discard the left half
        else
            right = middle - 1;  // discard the right half
    }

    return -1; // not found
}
```

Each iteration discards half of the remaining range, so doubling the input length adds only one extra step.

## O(n): Linear Time

With O(n), the execution time grows in direct proportion to the input length.

```csharp
var users = new List<string> { "Caique", "Maria", "Fernanda" };

foreach (var user in users)
{
    Console.WriteLine(user);
}
```

In the example above, if the input length grows by 10%, the execution time grows by roughly 10% as well.

## O(n log n): Linearithmic Time

This is the complexity of most efficient comparison-based sorting algorithms.

```csharp
var numbers = new List<int> { 5, 2, 8, 1, 3 };
numbers.Sort();
```

In the example above, `List<T>.Sort` uses an introspective sort (a hybrid of quicksort, heapsort, and insertion sort), which runs in O(n log n). The same complexity is seen in algorithms like merge sort and quicksort (on average).

## O(n²): Quadratic Time

This is the main villain in many codebases. The execution time grows very fast as the input length increases.

```csharp
var users = new List<string> { "Caique", "Maria", "Fernanda" };

foreach (var user1 in users)
{
    foreach (var user2 in users)
    {
        Console.WriteLine($"{user1} - {user2}");
    }
}
```

It's usually caused by a loop nested inside another loop. Try to avoid this pattern whenever the input can grow large.

## Conclusion

Whether you work on the frontend or the backend, knowing these complexity classes helps you prevent and diagnose performance issues in your programs.
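To make the logarithmic growth rate above tangible, here is a small sketch that counts how many loop iterations binary search performs in the worst case for different input sizes. The `BinarySearchSteps` helper is hypothetical, added for illustration: it is the binary search from earlier, modified to return its iteration count instead of an index.

```csharp
using System;

class GrowthDemo
{
    // Same loop as the BinarySearch example, but returns how many
    // iterations ran, so the O(log n) growth is visible directly.
    public static int BinarySearchSteps(int[] array, int target)
    {
        int steps = 0;
        int left = 0;
        int right = array.Length - 1;

        while (left <= right)
        {
            steps++;
            int middle = left + (right - left) / 2;

            if (array[middle] == target)
                return steps;

            if (array[middle] < target)
                left = middle + 1;
            else
                right = middle - 1;
        }

        return steps;
    }

    static void Main()
    {
        foreach (int n in new[] { 1_000, 1_000_000 })
        {
            int[] sorted = new int[n];
            for (int i = 0; i < n; i++) sorted[i] = i;

            // Searching for a value larger than every element forces
            // the worst case: the loop runs until the range is empty.
            int steps = BinarySearchSteps(sorted, int.MaxValue);
            Console.WriteLine($"n = {n,9}: {steps} steps");
        }
    }
}
```

A linear scan over 1,000,000 elements would need up to 1,000,000 comparisons; the binary search loop finishes in a couple of dozen steps, and multiplying the input by 1,000 adds only about 10 of them.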
