I currently access the Spark UI from Google Colab through an ngrok tunnel, and I want to learn how to use it effectively to detect bottlenecks in my Spark jobs. Which tabs and features should I focus on for performance monitoring? Specifically, how can I identify issues such as slow stages, skewed data, and inefficient tasks from the UI? Any tips on what to look for, and on how to interpret the metrics to spot performance bottlenecks, would be really helpful!
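For reference, here is roughly how I expose the UI. This is a sketch of my setup, not part of the question itself: it assumes local-mode PySpark in Colab, the `pyngrok` wrapper for ngrok, and that the driver's UI comes up on the default port (4040, or 4041+ if that port is taken, which is why the port is read from `sparkContext.uiWebUrl` rather than hardcoded).

```python
def ui_port(ui_web_url: str) -> int:
    """Extract the port from SparkContext.uiWebUrl, e.g. 'http://host:4040' -> 4040."""
    return int(ui_web_url.rsplit(":", 1)[1])

def expose_spark_ui():
    # Heavy imports kept local: pyspark and pyngrok are assumed installed
    # in the Colab runtime (pip install pyspark pyngrok).
    from pyspark.sql import SparkSession
    from pyngrok import ngrok

    spark = (SparkSession.builder
             .master("local[*]")
             .appName("ui-demo")  # hypothetical app name
             .getOrCreate())

    # Tunnel whatever port the driver actually bound the UI to.
    tunnel = ngrok.connect(ui_port(spark.sparkContext.uiWebUrl))
    print("Spark UI:", tunnel.public_url)

if __name__ == "__main__":
    expose_spark_ui()
```

Opening the printed public URL shows the same Jobs/Stages/Storage/Executors/SQL tabs I'm asking about below.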