Apex Interview Questions – Proven 100+ Expert-Level Q&A for Fast Interview Success

Apex Interview Questions and Answers 2025: Master 100+ expert-level Apex concepts, triggers, SOQL, async Apex & integrations to crack senior Salesforce interviews.
Q1. Explain the significance of a Trigger Handler/Framework in a large application and how it promotes bulkification and recursion control.
Answer: A Trigger Handler centralizes the business logic away from the trigger definition itself. It promotes bulkification by ensuring logic is always written to process lists of records, not single records. It controls recursion by using a static boolean variable in the handler class that is checked and set to true at the start of the logic and reset at the end, preventing re-entry for the same trigger context.
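A minimal sketch of this pattern (the class, method, and field names are illustrative, not a prescribed framework):

```apex
// Illustrative handler skeleton for an Account trigger.
public class AccountTriggerHandler {
    // Static variables live for the duration of the transaction.
    public static Boolean hasRun = false;

    public static void onAfterUpdate(List<Account> newList, Map<Id, Account> oldMap) {
        if (hasRun) {
            return; // Recursion control: prevent re-entry from cascading updates.
        }
        hasRun = true;
        // Bulkification: always operate on the full list, never one record at a time.
        List<Task> tasks = new List<Task>();
        for (Account acc : newList) {
            tasks.add(new Task(WhatId = acc.Id, Subject = 'Account updated'));
        }
        insert tasks; // One DML statement for the entire batch.
    }
}
```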
Q2. Describe the role of the switch on Trigger.operationType structure in a well-designed trigger framework.
Answer: The switch on Trigger.operationType block ensures that the execution of handler methods is conditional based on the specific trigger event (e.g., BEFORE_INSERT, AFTER_UPDATE). This replaces verbose if/else if logic and makes the handler explicit, readable, and ensures the correct data context (Trigger.new, Trigger.oldMap) is passed to the specific handler method.
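A sketch of a dispatcher trigger using this structure (handler method names are hypothetical):

```apex
// Illustrative "one trigger per object" dispatcher.
trigger AccountTrigger on Account (before insert, before update, after insert, after update) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            AccountTriggerHandler.onBeforeInsert((List<Account>) Trigger.new);
        }
        when BEFORE_UPDATE {
            AccountTriggerHandler.onBeforeUpdate(
                (List<Account>) Trigger.new, (Map<Id, Account>) Trigger.oldMap);
        }
        when AFTER_UPDATE {
            AccountTriggerHandler.onAfterUpdate(
                (List<Account>) Trigger.new, (Map<Id, Account>) Trigger.oldMap);
        }
        when else {
            // AFTER_INSERT, deletes, undeletes, etc.
        }
    }
}
```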
Q3. How do you prevent a trigger from running more than once in the same transaction for the same record (recursion)?
Answer: By utilizing a static boolean variable (e.g., hasRun) within a dedicated static helper class. In the handler: if (!TriggerHandler.hasRun) { TriggerHandler.hasRun = true; /* execute logic */ }
Since static variables maintain state throughout the transaction, the logic only executes on the first pass.
Q4. Why should DML operations and asynchronous calls be kept out of the before context, and what are the appropriate contexts for these actions?
Answer: Performing DML on the Trigger.new records in a before context is unnecessary — field changes made there are saved automatically when the trigger completes — and an explicit DML statement on those records can throw a runtime error and wastes limits. Asynchronous calls (@future, Queueable) should generally be made from after contexts because they typically need the record Id, which is only assigned once the database has saved the record.
Q5. Explain the core steps in the Salesforce Order of Execution where Apex triggers reside.
Answer: A simplified view of the documented Order of Execution:
1. Load the original record from the database (or initialize it for insert) and run system validation.
2. Execute before-save record-triggered flows.
3. Execute before triggers.
4. Run custom validation rules (and most system validation again).
5. Execute duplicate rules.
6. Save the record to the database (without committing).
7. Execute after triggers.
8. Execute assignment rules, auto-response rules, and workflow rules.
9. Execute escalation rules.
10. Execute after-save record-triggered flows.
11. Execute roll-up summary field calculations on parent/grandparent records.
12. Commit all DML operations to the database.
13. Execute post-commit logic, such as sending email and starting enqueued asynchronous jobs (@future, Queueable).
Q6. What is the potential impact of having multiple triggers on the same object, and what is the best practice?
Answer: Multiple triggers can lead to unpredictable execution order (Salesforce doesn't guarantee the order), complexity, and difficulty in maintenance. Best Practice: Use a "one trigger per object" pattern, delegating all logic to a single, centralized trigger handler class that controls execution order and flow.
Q7. Define and differentiate between the Trigger.new and Trigger.oldMap context variables.
Answer: ○ Trigger.new: A list of the new versions of the SObjects. Available in insert, update, and undelete operations. Used to read or modify fields before the save (in before context).
○ Trigger.oldMap: A map of IDs to the old versions of the SObjects. Available in update, delete, and undelete operations. Used to compare the new values with the old values to determine what changed.
Q8. How would you implement logic that only executes if a specific field (Status__c) was changed from its old value to a new value in an after update trigger?
Answer: Use Trigger.new and Trigger.oldMap in conjunction: for (Account newAcc : Trigger.new) { Account oldAcc = (Account) Trigger.oldMap.get(newAcc.Id); if (newAcc.Status__c != oldAcc.Status__c && newAcc.Status__c == 'Active') { /* execute logic only if status changed TO 'Active' */ } }
Q9. Why are complex SOQL queries inside a trigger handler dangerous, and how do you mitigate this risk?
Answer: They are dangerous because if the trigger executes for 200 records, the query runs 200 times, hitting the 100 SOQL limit very quickly (a "non-bulkified" scenario). Mitigation: Always collect all necessary IDs or fields into a Set or Map, move the SOQL query outside the record loop, and query all related data in a single SOQL statement.
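A before/after sketch of this mitigation inside a Contact trigger (field usage is illustrative):

```apex
// Non-bulkified (bad): one query per record — hits the SOQL limit at scale.
// for (Contact c : Trigger.new) {
//     Account a = [SELECT Industry FROM Account WHERE Id = :c.AccountId];
// }

// Bulkified (good): collect keys first, then one query for the whole batch.
Set<Id> accountIds = new Set<Id>();
for (Contact c : (List<Contact>) Trigger.new) {
    if (c.AccountId != null) {
        accountIds.add(c.AccountId);
    }
}
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);
for (Contact c : (List<Contact>) Trigger.new) {
    Account parent = accountsById.get(c.AccountId);
    // Use parent fields here without any additional queries.
}
```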
Q10. In a before trigger, if you modify a field on Trigger.new SObject, do you need to explicitly perform DML?
Answer: No. In the before context (before insert, before update), any modifications made to the records in Trigger.new are automatically saved to the database upon completion of the trigger context, as the transaction is still in the process of saving.
Q11. Explain the purpose of the addError() method on an SObject in a trigger, and when should it be used?
Answer: SObject.addError(message) is used in the before context to halt the entire transaction, prevent the record from being saved, and display the specified error message to the user on the UI. It should be used for validation checks that business logic dictates must prevent the save action.
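A short illustrative example in a before update trigger (the Status__c values are hypothetical):

```apex
// Illustrative before-update validation on Account.
for (Account acc : (List<Account>) Trigger.new) {
    Account oldAcc = (Account) Trigger.oldMap.get(acc.Id);
    if (oldAcc.Status__c == 'Closed' && acc.Status__c != 'Closed') {
        // Blocks the save for this record and surfaces the message in the UI.
        acc.addError('Closed accounts cannot be reopened.');
    }
}
```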
Q12. What is a "context-specific" trigger handler, and why is it superior to a single monolithic handler method?
Answer: A context-specific handler uses separate, dedicated methods for each trigger event (e.g., onBeforeInsert(List<SObject> newRecords), onAfterUpdate(Map<Id, SObject> newRecords, Map<Id, SObject> oldRecords)). It is superior because it clearly dictates which data contexts are available, improves readability, and makes unit testing much easier as you can test each specific context method in isolation.
Q13. When chaining operations in a trigger, how would you ensure all DML for a single transaction completes before calling an external web service?
Answer: The external web service call must be executed asynchronously using a @future(callout=true) method or, preferably, a Queueable or Batch class. Since asynchronous jobs only start after the main transaction is committed, this guarantees all DML is finalized before the callout begins.
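A sketch of the Queueable approach (the class name and endpoint are illustrative, and the example assumes a Named Credential is configured):

```apex
// Enqueued from an after trigger; the callout runs only after the
// enqueuing transaction has committed all of its DML.
public class OrderSyncQueueable implements Queueable, Database.AllowsCallouts {
    private Set<Id> orderIds;

    public OrderSyncQueueable(Set<Id> orderIds) {
        this.orderIds = orderIds;
    }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:External_System/orders'); // assumed Named Credential
        req.setMethod('POST');
        req.setBody(JSON.serialize(orderIds));
        HttpResponse res = new Http().send(req);
        // Inspect res.getStatusCode() and handle errors as needed.
    }
}
// From the after trigger:
// System.enqueueJob(new OrderSyncQueueable(Trigger.newMap.keySet()));
```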
Q14. What is the difference between a validation rule and logic implemented via addError() in a before trigger?
Answer: Both prevent the save and display an error. Validation Rules are declarative, simpler, and run after before triggers. addError() in an Apex trigger is programmatic, allowing for complex, dynamic, or database-driven validation checks that the declarative tools cannot achieve (e.g., checking records in another object).
Q15. How does the implicit sharing recalculation impact trigger execution?
Answer: After DML, Salesforce recalculates sharing rules and group membership. This process happens after after insert/update triggers and can sometimes itself trigger further automation (like Roll-up Summary Field recalculations), which could potentially re-enter a trigger, though this is rare in a well-controlled framework. It's a reminder that the transaction boundary extends beyond simple Apex execution.
Q16. Why is it a bad practice to use SeeAllData=true in modern Apex tests, and what is the preferred alternative?
Answer: SeeAllData=true exposes all existing data in the org (including potential dirty or test-specific data) to the test class, making tests brittle, non-deterministic, and dependent on the org's state. Preferred Alternative: Explicitly create the necessary test data within the test method using Test.startTest()/Test.stopTest() blocks, ensuring tests are isolated and reliable.
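A minimal isolated test following this pattern (AccountService.recalculateRatings is a hypothetical service method):

```apex
@isTest
private class AccountServiceTest {
    @isTest
    static void updatesRatingForLargeAccounts() {
        // All data is created inside the test — no dependence on org state.
        Account acc = new Account(Name = 'Test Co', NumberOfEmployees = 5000);
        insert acc;

        Test.startTest();
        AccountService.recalculateRatings(new List<Id>{ acc.Id }); // hypothetical
        Test.stopTest();

        Account result = [SELECT Rating FROM Account WHERE Id = :acc.Id];
        System.assertEquals('Hot', result.Rating, 'Large accounts should be rated Hot');
    }
}
```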
Q17. What is the minimum code coverage requirement for deployment, and why is high-quality code coverage (75%+) more important than just meeting the minimum?
Answer: The minimum code coverage for deployment is 75%. High-quality coverage is more important because it ensures that:
1) Bulkified code paths (both single-record and bulk scenarios) are tested,
2) Negative paths (error handling, validation logic) are executed, and
3) All future and queueable asynchronous paths are verified, reducing the risk of runtime errors in production.
Q18. Explain the use of System.runAs() and when it is necessary in a test method.
Answer: System.runAs(user) changes the running user's context in a test, enforcing that user's record sharing rules (note: it does not enforce FLS or object CRUD — Apex still runs in system mode). It is necessary when you need to test:
○ Permission-based logic: If a trigger or class behaves differently for a standard user versus a System Administrator.
○ Sharing/Access: To confirm that a user with restricted sharing settings (e.g., OWD Private) can or cannot access specific records.
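A sketch modeled on the standard documentation pattern (profile name and assertions are illustrative; note the test user does not need to be inserted):

```apex
@isTest
static void restrictedUserCannotSeePrivateAccounts() {
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
    User u = new User(
        Alias = 'tuser', LastName = 'Test', ProfileId = p.Id,
        Email = 'tuser@example.com',
        Username = 'tuser' + DateTime.now().getTime() + '@example.com',
        EmailEncodingKey = 'UTF-8', LanguageLocaleKey = 'en_US',
        LocaleSidKey = 'en_US', TimeZoneSidKey = 'America/Los_Angeles'
    );
    System.runAs(u) {
        // With a Private OWD, this user should see no accounts they don't own.
        List<Account> visible = [SELECT Id FROM Account];
        System.assertEquals(0, visible.size());
    }
}
```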
Q19. How do you test the limits consumed by a specific block of logic (e.g., a batch job's execute method)?
Answer: Use Test.startTest() and Test.stopTest(): code between these calls receives a fresh set of governor limits, isolating the consumption of the logic under test. Invoke the logic after Test.startTest(), then — while still inside the block, before Test.stopTest() — read the counters with the Limits methods (e.g., Limits.getDmlStatements(), Limits.getQueries()) and assert on them with System.assert.
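A sketch of a limit-budget assertion (AccountService.processAccounts is a hypothetical bulkified service):

```apex
@isTest
static void serviceIsBulkified() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        accounts.add(new Account(Name = 'Acc ' + i));
    }
    insert accounts;

    Test.startTest(); // Fresh set of governor limits starts here.
    AccountService.processAccounts(accounts); // hypothetical
    // Read the counters while still inside the isolated limit context.
    System.assert(Limits.getQueries() <= 2, 'Service should not query per record');
    System.assert(Limits.getDmlStatements() <= 1, 'Service should use one bulk DML');
    Test.stopTest();
}
```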
Q20. What is a Test Data Factory, and how does it contribute to maintainable tests?
Answer: A Test Data Factory is a reusable utility class that contains static methods dedicated to creating common test records (Account, Contact, complex setups). It contributes to maintainability by:
○ DRY Principle: Eliminating repetitive data setup code across test classes.
○ Consistency: Ensuring all tests use standard, valid record data.
○ Speed: Creating data efficiently (bulkified DML) and, where appropriate, enabling @isTest(isParallel=true).
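A minimal factory sketch (extend with one method per object as needed):

```apex
@isTest
public class TestDataFactory {
    public static List<Account> createAccounts(Integer count, Boolean doInsert) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(Name = 'Test Account ' + i));
        }
        if (doInsert) {
            insert accounts; // One bulk DML regardless of count.
        }
        return accounts;
    }
}
// Usage in a test:
// List<Account> accs = TestDataFactory.createAccounts(200, true);
```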
Q21. When testing an after insert trigger, how do you verify that a separate asynchronous job (like a Future method) was successfully invoked?
Answer: You must wrap the trigger-invoking DML in Test.startTest() and Test.stopTest().
○ Test.stopTest() forces the immediate, synchronous execution of all queued asynchronous calls (Future, Queueable).
○ Inside the test method, after Test.stopTest(), you can then query the database to verify the changes made by the asynchronous job, or use System.assert to check if expected records were created or updated.
Q22. What is the purpose of the @testSetup annotation, and what is its primary advantage over creating data within each test method?
Answer: @testSetup marks a method that runs once for the entire test class before any individual test methods. Its primary advantage is speed and efficiency: data created in @testSetup is inserted into the database only once and is then available to every test method in the class (each method sees a fresh copy — changes made by one test are rolled back before the next runs), so test methods don't need to repeat DML for common setup records.
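A short sketch of the pattern:

```apex
@isTest
private class OpportunityServiceTest {
    @testSetup
    static void setup() {
        // Runs once; the record is visible to every test method below.
        insert new Account(Name = 'Shared Test Account');
    }

    @isTest
    static void sharedDataIsAvailable() {
        Account acc = [SELECT Id FROM Account WHERE Name = 'Shared Test Account'];
        // Changes made here are rolled back before the next test method runs.
        System.assertNotEquals(null, acc.Id);
    }
}
```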
Q23. How do you test an exception scenario where a custom exception is expected to be thrown (negative testing)?
Answer: Apex has no lambda syntax and no built-in assertThrows method, so the standard pattern is a try/catch: invoke the method inside try, fail the test if no exception surfaces, and assert on the caught exception. Test.startTest(); try { MyClass.methodThatThrows(invalidData); System.assert(false, 'Expected a CustomException to be thrown.'); } catch (CustomException e) { /* optionally assert the message */ System.assertEquals('Invalid data provided.', e.getMessage()); } Test.stopTest();
Q24. When writing test classes, what does the @isTest(isParallel=true) annotation do, and what is the risk associated with it?
Answer: This annotation allows the test class's methods to run in parallel with other tests, significantly speeding up overall deployment and test execution time. Risk: It can cause intermittent test failures if test data is not properly isolated, or if the test relies on, deletes, or modifies existing setup data (hence the need to avoid SeeAllData=true and use @testSetup).
Q25. Explain the concept of Mocking in Apex for Callouts, and how is it implemented using HttpCalloutMock?
Answer: Mocking is the process of simulating the response from an external system during testing, eliminating the need for a real callout (which isn't allowed in tests). HttpCalloutMock is the interface used.
1. Create a class that implements HttpCalloutMock.
2. Implement the respond(HttpRequest request) method to return a dummy HttpResponse object with a specific status code and body.
3. In the test method, register the mock class: Test.setMock(HttpCalloutMock.class, new MyCalloutMock());
4. Invoke the method containing the callout logic.
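The steps above as a minimal sketch (MyService.doCallout is a hypothetical method containing the Http.send()):

```apex
@isTest
public class MyCalloutMock implements HttpCalloutMock {
    public HttpResponse respond(HttpRequest request) {
        // Return a canned response instead of making a real callout.
        HttpResponse res = new HttpResponse();
        res.setStatusCode(200);
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status":"ok"}');
        return res;
    }
}

// In the test method:
// Test.setMock(HttpCalloutMock.class, new MyCalloutMock());
// Test.startTest();
// MyService.doCallout(); // hypothetical
// Test.stopTest();
```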
Q26. What is Stubbing, and how does the Test.createStub(Type) method assist in testing dependencies without actual implementation?
Answer: Stubbing is similar to mocking but targets Apex dependencies (classes) rather than HTTP callouts. Test.createStub(parentType, stubProvider) generates a stub object for a (non-final) Apex class. You supply a class implementing the System.StubProvider interface, whose handleMethodCall() returns predictable results, isolating the testing of your main logic from its dependencies.
Q27. When testing DML operations, what is the best practice for ensuring that the SOQL query limits used to retrieve related data are not being hit?
Answer: The best practice is to test two scenarios:
1. Single Record Test: Ensures basic logic works.
2. Bulk Test (200 records): This is critical. Create and insert 200 records in one go within Test.startTest(). The logic will run once for the entire bulk, and if it is not properly bulkified (e.g., SOQL inside a loop), the test will fail due to hitting the SOQL limit, proving the code is flawed.
Q28. How do you ensure FLS (Field-Level Security) checks are respected when testing a Visualforce or Lightning controller method?
Answer: FLS checks are not automatically enforced in Apex — it runs in system mode even inside System.runAs(User). You must use security enforcement mechanisms explicitly:
○ WITH SECURITY_ENFORCED: For SOQL queries.
○ Describe checks: e.g., Schema.SObjectType.Account.fields.getMap().get(fieldName).getDescribe().isAccessible() — explicitly verify FLS before querying or manipulating data (or use Security.stripInaccessible()).
You then run the test method using System.runAs() as a low-privilege user and assert that fields they don't have access to are either returned as null or prevented from being updated.
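A hedged sketch of the two enforcement styles mentioned above:

```apex
// 1. Fail fast: the query throws a QueryException if any selected
//    field or object is inaccessible to the running user.
List<Account> accs = [SELECT Id, Name, AnnualRevenue FROM Account WITH SECURITY_ENFORCED];

// 2. Strip instead of throw: remove inaccessible fields from the results.
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, AnnualRevenue FROM Account]
);
List<Account> safeAccs = (List<Account>) decision.getRecords();
```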
Q29. What are the key elements you must assert on in a unit test to prove the quality of the code?
Answer: You must assert on:
1. Correct State Change: The expected data changes occurred (e.g., status field updated, related record created).
2. Negative Paths: Error messages or validation failure occurred when expected (addError, assertThrows).
3. No Unintended Side Effects: Verify that unrelated data remains untouched.
4. Bulk/Limit Consumption (Implicitly): Ensure the transaction did not hit governor limits.
Q30. Why should test classes be designed to use hardcoded IDs as little as possible, and what is the exception?
Answer: Hardcoded record IDs (e.g., new Account(Id = '001A000001...')) are bad because they are only valid in the source org; after deployment to another environment (sandbox/production) they point to nothing, leading to failures. The Exception: IDs for platform metadata that is guaranteed to exist, such as Profiles or Record Types — and even these should be retrieved dynamically in the test code (e.g., querying Profile by Name, or using the Schema record type describe methods) rather than pasted in as literals.
Q31. What is the primary use case for an @future method, and what are its two main restrictions?
Answer: Primary Use Case: Executing long-running operations like web service callouts or complex DML operations in a separate thread, primarily to circumvent the mixed DML error or to perform callouts after a transaction commits.
Restrictions:
1. No Chaining: You cannot call a future method from another future method.
2. No SObject Parameters: Parameters must be primitive data types, arrays of primitives, or collections of primitives, forcing you to pass IDs and re-query data inside the method.
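A sketch showing both restrictions in practice — primitive parameters only, so IDs are passed in and data is re-queried inside:

```apex
public class AccountCalloutService {
    @future(callout=true)
    public static void syncAccounts(Set<Id> accountIds) {
        // Re-query because SObjects cannot be passed to a future method.
        List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
        // Perform the callout / heavy work here, after the caller's
        // transaction has already committed.
    }
}
// Caller (e.g., from an after trigger):
// AccountCalloutService.syncAccounts(Trigger.newMap.keySet());
```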
Q32. Describe the advantages of Queueable Apex over @future methods, particularly concerning limits and chaining.
Answer:
○ Object Parameters: Queueable methods can take SObject records or non-primitive types as arguments, avoiding the re-querying required by future methods.
○ Chaining: Queueable jobs can be chained by calling System.enqueueJob(new MyQueueableClass()) within the execute method, allowing for complex, multi-step asynchronous processes.
○ Job ID: You get a job ID immediately after enqueuing, allowing you to monitor the job's progress.
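A sketch of a chained Queueable with an explicit termination condition (the step logic is illustrative):

```apex
public class MultiStepJob implements Queueable {
    private Integer step;

    public MultiStepJob(Integer step) {
        this.step = step;
    }

    public void execute(QueueableContext ctx) {
        // ... perform the work for this step ...
        if (step < 3) {
            // An executing Queueable may enqueue exactly one child job.
            System.enqueueJob(new MultiStepJob(step + 1));
        }
    }
}
// Id jobId = System.enqueueJob(new MultiStepJob(1)); // monitor via AsyncApexJob
```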
Q33. Explain the purpose of the three methods in the Database.Batchable interface and the flow of execution.
Answer: 1. start(Database.BatchableContext bc): Called once at the beginning. Used to collect the records or data to be processed (typically returns a Database.QueryLocator or an Iterable).
2. execute(Database.BatchableContext bc, List<SObject> scope): Called once for each batch of records (up to 200). This contains the core business logic and DML. It is executed in its own separate transaction with its own set of Governor Limits.
3. finish(Database.BatchableContext bc): Called once after all batches are processed. Used for cleanup, sending confirmation emails, or initiating the next batch in a chain.
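A minimal skeleton showing all three methods together (IsActive__c is a hypothetical field):

```apex
public class AccountCleanupBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Hypothetical custom field in the filter.
        return Database.getQueryLocator(
            'SELECT Id, Name FROM Account WHERE IsActive__c = false');
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Runs once per chunk, each with a fresh set of governor limits.
        for (Account acc : scope) {
            acc.Name = acc.Name + ' (inactive)';
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        // Cleanup / notification, e.g., send a summary email or chain another batch.
    }
}
// Database.executeBatch(new AccountCleanupBatch(), 200);
```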
Q34. How do you monitor and handle failures in a Queueable or Batch Apex job after it has been executed?
Answer: Both jobs provide a job ID that can be used for monitoring:
○ AsyncApexJob: Query the AsyncApexJob object using the ID to check its status (Completed, Failed, Processing).
○ Queueable: The job ID is returned by System.enqueueJob().
○ Batch: The job ID is returned by Database.executeBatch().
○ Failure Handling: For batches, implement the Database.Stateful interface so an instance variable (e.g., a list of error messages) survives across execute transactions; the finish method then aggregates and reports the errors. You can also inspect the NumberOfErrors and ExtendedStatus fields on the AsyncApexJob record.
Q35. What is the difference between implementing Database.QueryLocator and Iterable<SObject> in the start method of a Batch job?
Answer: ○ Database.QueryLocator: Used for simple SOQL queries that do not exceed 50 million records. It is highly optimized and bypasses the heap size limit by chunking the query results directly from the database. (Recommended for large data sets).
○ Iterable<SObject>: Used when the data source needs complex, programmatic generation or manipulation (e.g., retrieving external data, calculating an iterable list from multiple sources). The entire result set is loaded into memory, which can quickly hit the heap size limit for very large data sets.
Q36. How do you schedule a Batch Apex class to run daily at 2:00 AM?
Answer: You must create a separate Apex class that implements the Schedulable interface:
public class MyDailyScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        // Instantiate and execute the batch class
        Database.executeBatch(new MyBatchClass(), 200);
    }
}
Then use the System.schedule method to queue it up (e.g., in anonymous Apex or a utility method):
String cron = '0 0 2 * * ?'; // Seconds, Minutes, Hours, Day-of-month, Month, Day-of-week, (Year)
System.schedule('Daily Batch Job', cron, new MyDailyScheduler());
Q37. What is the 'Mixed DML Operation' error, and how do asynchronous methods solve it?
Answer: This error occurs when a transaction attempts to perform DML on setup objects (like User, Profile, Group, PermissionSet) and non-setup objects (like Account, Contact, CustomObject) in the same transaction. Asynchronous methods (@future, Queueable) solve this because they run in a separate transaction, allowing the first transaction to complete its DML on one type of object and commit, before the second transaction begins DML on the other type.
Q38. Explain the heap size limit restriction on asynchronous jobs and how Batch Apex helps mitigate it.
Answer: Asynchronous jobs have a heap size limit (typically 12MB). This limit restricts how much data (variables, collections, temporary objects) can be stored in memory during the execution of the method. Batch Apex mitigates this because:
○ The execute method runs in separate transactions. The heap is reset and limits are refreshed for every batch/chunk of records, ensuring that large-scale processing doesn't accumulate too much data in the heap across the entire job.
Q39. How do you ensure a Queueable job that makes a callout is properly executed when testing?
Answer:
1. Implement Database.AllowsCallouts on the Queueable class.
2. Use Test.setMock(HttpCalloutMock.class, new MyMock()); to register the mock callout response.
3. Call System.enqueueJob(new MyQueueable());
4. Wrap the enqueue call in Test.startTest()/Test.stopTest() to force synchronous execution of the job.
Q40. When would you use a Scheduled Flow over Scheduled Apex?
Answer:
○ Scheduled Flow: Use when the logic is primarily declarative (updating a few fields, creating simple related records) and does not require complex database queries, external integrations, or advanced exception handling. It's faster to develop and maintain by admins.
○ Scheduled Apex: Use when the logic requires programmatic complexity (complex calculations, custom data structures), high-volume data processing (using Batch Apex), or external web service integrations.
Q41. What is the critical limit when chaining Queueable jobs, and how does it prevent infinite loops?
Answer: When chaining, an executing Queueable job may enqueue only one child job (a single synchronous transaction, by contrast, can enqueue up to 50 jobs). Production orgs impose no limit on chain depth (Developer and Trial orgs cap it at 5), so the one-child-per-job rule plus an explicit termination condition in your code is what prevents self-perpetuating or accidentally infinite asynchronous chains — always include clear stop logic when chaining.
Q42. Describe the role of Database.Stateful in Batch Apex.
Answer: Implementing Database.Stateful ensures that the state of instance member variables is maintained between transactions (between calls to the execute method). This is crucial for:
○ Error Aggregation: Collecting all records that failed DML across all batches.
○ Total Count: Calculating a running total or count across all processed records.
○ Note: The state is maintained only for instance member variables, not static variables.
Q43. What is the maximum number of records a single Batch Apex job can process, and why is the actual throughput often higher than the QueryLocator limit?
Answer: A Database.QueryLocator can retrieve up to 50 million records. The job can actually process that volume because the execute method's governor limits are refreshed for every chunk (the scope size, 200 records by default), letting the work spread across potentially hundreds of thousands of individual transactions without any single one hitting per-transaction limits such as the 10,000 DML rows limit.
Q44. In the context of the platform, when is it safer to use Database.executeBatch instead of a custom recursive Queueable chain?
Answer: It is safer to use Database.executeBatch when dealing with large, undefined volumes of records (e.g., tens of thousands). This is because Batch Apex automatically handles the chunking, provides built-in failure monitoring, and the platform guarantees limit renewal for each batch. A custom Queueable chain is best for short, fixed-length multi-step processes where the process sequence is critical.
Q45. How do you ensure a Scheduled Apex job respects user permissions when performing DML?
Answer: Scheduled Apex runs by default in System Mode, meaning it ignores the current user's permissions and sharing rules. If you need to enforce them, you must:
1. Implement the class with the with sharing (or inherited sharing) keyword to enforce record sharing.
2. Enforce object and field permissions explicitly, e.g., with WITH USER_MODE (or WITH SECURITY_ENFORCED) in SOQL, or Security.stripInaccessible() before DML. Note that System.runAs() is available only in test methods and cannot be used in scheduled Apex.
Q46. What is the single most common cause of hitting the 100 SOQL query limit, and how is it definitively resolved?
Answer: The most common cause is executing a SOQL query inside a for loop. Resolution: The code must be bulkified by:
1. Collecting all necessary IDs/keys into a Set or Map.
2. Moving the SOQL query outside the loop, using a WHERE Id IN :idSet or similar clause.
3. Processing the results using a Map<Id, SObject> for efficient lookup inside the loop.
Q47. What is the difference between synchronous and asynchronous Governor Limits, specifically regarding the total number of SOQL queries allowed?
Answer:
○ Synchronous Limits: Apply to Apex executed in real-time (e.g., controllers, triggers). Max SOQL queries: 100.
○ Asynchronous Limits: Apply to code executed later (e.g., future, Queueable, Batch). Max SOQL queries: 200.
This increase in limits for asynchronous execution acknowledges the heavier data processing typically done in background jobs.
Q48. When updating multiple records, how can you reduce the risk of hitting the 150 DML statement limit?
Answer: By using the bulk DML operations. Instead of running DML inside a loop for individual records, collect all records that need to be inserted, updated, or deleted into a List<SObject> and perform the DML operation (e.g., update recordList;) once outside the loop. Each list operation counts as one DML statement.
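A sketch of the collect-then-update pattern (the contacts list and placeholder email are illustrative):

```apex
// Collect first, then one DML statement outside the loop.
List<Contact> toUpdate = new List<Contact>();
for (Contact c : contacts) { // contacts: any list already in scope
    if (c.Email == null) {
        c.Email = 'unknown@example.com'; // illustrative default
        toUpdate.add(c);
    }
}
if (!toUpdate.isEmpty()) {
    update toUpdate; // Counts as a single DML statement for up to 10,000 rows.
}
```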
Q49. Explain the "Separation of Concerns" principle in Apex and how it helps manage complexity and limits.
Answer: Separation of Concerns dictates that different layers of your code should be responsible for distinct functionalities. For instance:
○ Trigger: Delegates control.
○ Handler: Controls flow and bulkifies data.
○ Service Layer: Contains core, reusable business logic and calculations.
○ DAO/Selector Layer: Handles all database access (SOQL).
This separation helps manage limits because all database access is concentrated in the Selector layer, ensuring no SOQL is accidentally placed inside a loop in the Service layer.
Q50. What is the "query more records than the governor limit allows" limit, and what is the maximum number of records that can be returned by a single SOQL query in synchronous Apex?
Answer: The limit on the number of records returned by a SOQL query is 50,000 for synchronous transactions. Note: This limit is significantly higher for Batch Apex using Database.QueryLocator (up to 50 million).
Q51. How can you ensure that complex, multi-layered business logic does not hit the CPU time limit (10,000ms synchronous / 60,000ms asynchronous)?
Answer:
○ Refactoring: Optimize loops, reduce nested logic, and use efficient Apex collections (Map, Set) for lookups instead of list iteration.
○ Delegation: Move CPU-intensive operations into asynchronous contexts (Queueable or Batch) to leverage the 60,000ms limit.
○ Algorithmic Efficiency: Use more efficient algorithms (e.g., binary search instead of linear search where appropriate).
Q52. When should you use the Database class methods (e.g., Database.update()) instead of the DML statements (e.g., update)?
Answer: Use Database methods when you want to handle potential DML errors gracefully without halting the entire transaction.
○ DML Statement: If an error occurs, the entire transaction is rolled back.
○ Database.update(records, false): The false parameter tells the system to allow partial success. You can then check the Database.SaveResult array returned by the method to identify which records failed and which succeeded, logging the errors and proceeding with the successful records.
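A sketch of partial-success handling (accountsToUpdate is assumed to be a list already in scope; results come back in the same order as the input list):

```apex
Database.SaveResult[] results = Database.update(accountsToUpdate, false); // allOrNone = false
for (Integer i = 0; i < results.size(); i++) {
    Database.SaveResult sr = results[i];
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            // Log and continue; successful records in the list are still saved.
            System.debug('Failed ' + accountsToUpdate[i].Id + ': ' + err.getMessage());
        }
    }
}
```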
Q53. What is the difference between the Total Heap Size limit and the Maximum Size of View State limit?
Answer:
○ Total Heap Size: The amount of memory an Apex transaction can use to store variables, collections, and objects. (Synchronous: 6MB, Asynchronous: 12MB).
○ Maximum Size of View State: Specific to Visualforce pages. It's the maximum size of the encrypted, hidden form field that stores the state of the page components and controller variables between requests. (Max: 135KB). This is a limit developers often hit when dealing with large lists or complex data structures in Visualforce controllers.
Q54. How do you safely and efficiently iterate over a large result set that exceeds the 50,000-record limit in a trigger or controller?
Answer: You cannot. Any synchronous Apex transaction must respect the 50,000 record limit. If you anticipate querying more than 50,000 records, you must switch the operation to a Batch Apex job and use the Database.QueryLocator in the start method, which bypasses this limit.
Q55. When designing a large-scale data migration, what is the safest batch size to use for Database.executeBatch and why?
Answer: A common safe batch size is 200. This is the default scope; the maximum for a QueryLocator-based batch is 2,000. A scope of 200 (or lower) is the safe choice for migrations: each chunk gets a fresh set of governor limits, and the smaller transaction size leaves ample headroom for SOQL queries, DML rows, and CPU time even when the logic touches related records, while still keeping the total number of transactions for the overall job manageable.
Q56. What is the purpose of the System.LimitException class, and how do you handle it?
Answer: System.LimitException is the type of exception thrown when a Governor Limit (SOQL count, CPU time, DML count, etc.) is exceeded. You cannot catch a System.LimitException in a try/catch block within the same execution context that throws it, because the transaction is immediately terminated. Handling involves: Proactively writing bulkified code to avoid the limit in the first place, or ensuring the limit exception is handled gracefully in the calling context if possible (e.g., catching a failed DML in the parent transaction if it was called using a service).
Q57. How does the Salesforce platform handle recursion detection for flow/process builders that call Apex, and vice versa?
Answer: Salesforce automation systems can enter recursion loops when a trigger calls a flow, and that flow calls another flow or another trigger. The platform relies on the Order of Execution and the fact that the entire transaction stack runs until a commit or rollback.
To prevent infinite loops, developers must use:
○ Trigger Framework: Apex recursion controls (static boolean flags).
○ Flow/Process Controls: Entry criteria that evaluate the ISCHANGED() function to only fire the automation if specific fields are modified.
Q58. Why is using the Set collection often more efficient than using List for collecting IDs in bulkified code?
Answer: A Set guarantees uniqueness and provides faster lookup time (near O(1) performance) using the .contains() method, making it ideal for:
○ Collecting unique IDs from a trigger context to use in a SOQL WHERE clause.
○ Quickly checking whether an ID has already been processed or is present in a set of keys.
A List requires linear iteration (O(n)) to check for existence, which can quickly consume CPU time in large loops.
Q59. Explain the importance of using for (SObject record : [SELECT ...]) syntax for SOQL queries over querying to a list and then looping over the list.
Answer: The SOQL for loop retrieves records from the query cursor in chunks (batches of 200), processing each chunk before fetching the next, so the full result set is never held in the heap at once — this is the key defense against the heap size limit for large queries. Assigning the query to a List first loads every record into memory before the loop begins. Both forms count identically against the SOQL row limit, so for small result sets they are functionally similar.
Q60. What is a "non-selective query," and why is it a significant performance and limit risk, especially in triggers?
Answer: A non-selective query is one where the filter (WHERE clause) does not use an indexed field, or the filter returns more than a certain percentage of records in the object (usually 10% for custom objects, or a lower threshold for very large standard objects). Risk: If a query is non-selective, Salesforce requires a full table scan, which is CPU-intensive and severely impacts performance. It can quickly hit the "Maximum number of CPU usage time" limit. Salesforce may throw a QueryException or LimitException if the query takes too long.
Q61. Describe the concept of a Semi-Join in SOQL, and provide a practical example of when to use it.
Answer: A Semi-Join uses a subquery in the WHERE clause to filter the parent records based on criteria in a related child object or an unrelated object. The subquery only returns IDs.
○ Example: Retrieve all Accounts that have at least one Contact in the state of 'CA'. SELECT Id, Name FROM Account WHERE Id IN
(SELECT AccountId FROM Contact WHERE MailingState = 'CA')
○ Advantage: It is highly efficient for filtering large data sets using relationship criteria.
Q62. Describe the concept of an Anti-Join in SOQL, and provide a practical example of when to use it.
Answer: An Anti-Join is the opposite of a Semi-Join. It uses a subquery in the WHERE clause with the NOT IN operator to filter the parent records based on the absence of related data.
○ Example: Retrieve all Accounts that have no Contacts.
SELECT Id, Name FROM Account WHERE Id NOT IN
(SELECT AccountId FROM Contact)
○ Advantage: Efficiently finds records lacking a specific relationship.
Q63. What is the difference in structure and usage between SOQL and SOSL?
Answer:
| Feature | SOQL (Salesforce Object Query Language) | SOSL (Salesforce Object Search Language) |
| :--- | :--- | :--- |
| Query Scope | A single standard or custom object per query. | Multiple objects and their text-based fields in one search. |
| Filter Type | Structured WHERE clauses based on field values. | Text-based search across multiple fields/objects (like a search engine). |
| Return Type | List<SObject> (or nested lists for parent-child queries). | List<List<SObject>> (a list of lists, one list per searched object). |
| Primary Use | Reporting, fetching related data, structured queries. | Global search functionality, searching attachment/document content. |
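A brief SOSL sketch showing the multi-object return shape (the search term and fields are illustrative):

```apex
// SOSL searches text fields across several objects in a single statement
List<List<SObject>> results = [
    FIND 'Acme' IN ALL FIELDS
    RETURNING Account(Id, Name), Contact(Id, LastName)
];
// One inner list per RETURNING object, in declaration order
List<Account> accounts = (List<Account>) results[0];
List<Contact> contacts = (List<Contact>) results[1];
```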
Q64. How do you utilize Relationship Queries to bulkify data access in a trigger when moving from a child to a parent object?
Answer: Instead of looping through all child records and querying the parent ID one by one, use a relationship query to fetch the parent data alongside the child data in a single, bulkified step.
○ Example (Contact to Account):
SELECT Name, Account.Name, Account.Industry
FROM Contact WHERE Id IN :Trigger.new
The Account fields are directly available via dot notation on the Contact SObject.
Q65. How does a developer retrieve records from a parent object and its corresponding child records in a single SOQL query?
Answer: Use a Nested Query (or Parent-to-Child Relationship Query). The child relationship name (e.g., Contacts) is used as an inner SELECT statement within the parent query.
○ Example (Account and all Contacts):
SELECT Id, Name, (SELECT Id, LastName FROM Contacts) FROM Account WHERE Id = :accountId
Q66. In terms of transaction integrity, explain the difference between delete myRecord; and Database.delete(myRecord, false);
Answer:
○ delete myRecord; (DML Statement): If the deletion fails for any reason (e.g., a required lookup field, validation rule), the transaction immediately stops and is rolled back entirely.
○ Database.delete(myRecord, false); (Database Method with allOrNone=false): If the deletion fails, the failure is recorded in the returned Database.DeleteResult object, but the transaction continues processing any other records/code. The DML that failed is not rolled back, allowing for partial success.
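A sketch of the partial-success pattern, assuming recordsToDelete is a list built earlier in the transaction:

```apex
// allOrNone = false: failures are captured per record instead of aborting everything
Database.DeleteResult[] results = Database.delete(recordsToDelete, false);
for (Database.DeleteResult dr : results) {
    if (!dr.isSuccess()) {
        for (Database.Error err : dr.getErrors()) {
            System.debug('Delete failed: ' + err.getMessage());
        }
    }
}
```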
Q67. What is the purpose of the FOR UPDATE clause in SOQL, and when must you use it?
Answer: The FOR UPDATE clause locks the retrieved records (Account, Contact, etc.) for the duration of the current transaction. This prevents other concurrent transactions or users from simultaneously updating or deleting the same records. It must be used when:
○ You need to read a record's value, perform a calculation based on that value, and then update the record (e.g., updating a counter field), to ensure the record isn't changed by another process between the read and write operations.
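A sketch of the read-calculate-write pattern under a row lock (Counter__c and accountId are assumed names, not part of the original answer):

```apex
// Lock the row so no other transaction can modify it between the read and the write
Account acc = [SELECT Id, Counter__c FROM Account WHERE Id = :accountId FOR UPDATE];
acc.Counter__c = (acc.Counter__c == null ? 0 : acc.Counter__c) + 1;
update acc; // the lock is released when the transaction commits or rolls back
```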
Q68. When would you use the LIMIT and OFFSET clauses in SOQL, and what is the platform's restriction on using OFFSET?
Answer:
○ LIMIT: Used to restrict the maximum number of records returned by the query. Essential for performance and avoiding large result sets.
○ OFFSET: Used for pagination, specifying the starting row offset from the beginning of the result set.
○ Restriction: Salesforce has a hard limit on OFFSET. In Apex, the maximum offset you can specify is 2,000 records.
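A pagination sketch within the 2,000-row OFFSET cap (page size and page number are illustrative):

```apex
Integer pageSize = 50;
Integer pageNumber = 3;
// Compute the offset first; it must stay at or below 2,000 in Apex
Integer offsetRows = (pageNumber - 1) * pageSize;
List<Account> page = [
    SELECT Id, Name FROM Account
    ORDER BY Name
    LIMIT :pageSize OFFSET :offsetRows
];
```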
Q69. How do you query for data that includes deleted records (records in the Recycle Bin)?
Answer: Use the ALL ROWS keyword at the end of the SOQL query.
SELECT Id, Name, IsDeleted FROM Account WHERE IsDeleted = true ALL ROWS
This is necessary for operations like cleanup, auditing, or retrieving data that needs to be restored.
Q70. Why should you avoid using the SELECT * equivalent in Apex (selecting all fields), and what is the performance risk?
Answer: Apex does not have a SELECT * equivalent, but selecting every field is possible. Risk: Querying unnecessary fields consumes heap size and impacts performance, especially for records with many custom fields. Best Practice: Always query only the fields you explicitly need, which is essential for efficient transaction processing and improved read performance.
Q71. Explain the difference between with sharing and without sharing class definitions.
Answer:
○ with sharing: Explicitly enforces the running user's sharing rules (OWD, Role Hierarchy, Sharing Rules, Teams) on record access. This is the recommended setting for custom Apex logic to prevent unauthorized data access.
○ without sharing: Explicitly bypasses the running user's sharing rules. The code runs in System Mode regarding record access. Used for utility classes or when the business requirement is to ensure all users can perform an action regardless of their record-level sharing.
Q72. What happens if an Apex class is defined without either the with sharing or without sharing keywords?
Answer: If neither keyword is specified, the class inherits the sharing setting of the class that called it. If the calling class is with sharing, the execution respects sharing; if the caller is without sharing, it does not. If the class itself is the entry point (e.g., a Visualforce controller), sharing rules are not enforced by default. Apex also provides the inherited sharing keyword, which behaves the same way except that an entry point defaults to with sharing. Best practice is to always declare the sharing model explicitly.
Q73. Describe the four levels of security checks a developer must perform to write secure Apex (CRUD and FLS).
Answer: Developers must check:
1. CRUD (Object Access): Can the user read/create/update/delete the SObject type? (e.g., Schema.SObjectType.Account.isAccessible())
2. FLS (Field Access): Can the user access a specific field on the SObject? (e.g., Schema.SObjectType.Account.fields.Name.isAccessible())
3. Sharing (Record Access): Does the user have access to the specific record? (Handled by with sharing keyword).
4. Input Sanitation/Validation: Is the user input clean and properly validated against business rules?
Q74. How do you programmatically check if the running user has Create access for the Account object and Edit access for the Account Name field?
Answer: Use the Schema methods:
// CRUD check (object access)
Boolean canCreate = Schema.sObjectType.Account.isCreateable();
// FLS check (field access)
Boolean canEditName = Schema.sObjectType.Account.fields.Name.isUpdateable();
Q75. What is the WITH SECURITY_ENFORCED clause in SOQL, and what security checks does it automatically perform?
Answer: WITH SECURITY_ENFORCED is a modern SOQL clause (beta in API 45.0, generally available in API 48.0) that automatically enforces FLS and CRUD permissions on the fields and objects referenced in the query.
○ Behavior: If the running user lacks FLS access to any queried field or CRUD access to the object, the query fails and throws a System.QueryException, preventing data leakage and eliminating the need for manual, verbose FLS checks before the query.
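A sketch of the fail-fast behavior (the object and field are illustrative):

```apex
try {
    List<Contact> contacts = [
        SELECT Id, Email FROM Contact WITH SECURITY_ENFORCED
    ];
} catch (System.QueryException e) {
    // Thrown when the user lacks FLS on Email or read access on Contact
    System.debug('Access denied: ' + e.getMessage());
}
```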
Q76. If a service class declared without sharing exposes a method calculateTotals(...), and that method is called from a trigger handler class that is with sharing, what sharing rules apply?
Answer: Apex does not allow the sharing keywords on individual methods; they are declared at the class level only. The sharing mode of the class in which a method is defined governs that method's execution. Therefore calculateTotals runs without sharing (System Mode for record access) even though the calling class is with sharing, and the caller's sharing setting resumes once the method returns.
Q77. How does the sharing model apply to Visualforce controllers defined with with sharing when accessing data via SOQL?
Answer: If the Visualforce controller is defined with sharing, all implicit and explicit SOQL queries executed within that controller and its helper methods will respect the running user's record-level sharing. This ensures that the data displayed to the user is filtered according to their permissions and sharing settings.
Q78. Why is it a security best practice to explicitly check isAccessible() even in classes that are defined as without sharing?
Answer: without sharing only bypasses record-level sharing (OWD/Rules), but it does not bypass FLS or CRUD. The running user's profile and permission sets can still restrict access to the object or field metadata. Therefore, you must use isAccessible() to prevent runtime errors (e.g., DML Exception) if the user tries to update a field they don't have FLS for.
Q79. What is the difference between UserMode and SystemMode for database operations in Apex?
Answer:
○ System Mode: The default execution context for Apex. The code runs with the full permissions of the platform (System Administrator), ignoring the running user's CRUD, FLS, and sharing rules (unless with sharing is used for sharing rules).
○ User Mode: The execution context respects all the running user's CRUD, FLS, and sharing rules. This is the behavior of formula fields, validation rules, and generally recommended for all UI-facing code to prevent security vulnerabilities. (Note: The WITH USER_MODE clause is a new feature in SOQL).
Q80. How can you leverage the Security.stripInaccessible() method to sanitize SObjects before DML, and what problem does it solve?
Answer: Security.stripInaccessible() is a utility method that removes fields the running user cannot access from a list of SObjects before DML is performed.
○ Problem Solved: It prevents runtime exceptions (and silent data leakage) that occur when a user inserts or updates records containing fields to which they lack FLS access.
○ Usage: You pass an AccessType (e.g., AccessType.CREATABLE) and a list of SObjects; the method returns an SObjectAccessDecision whose getRecords() holds sanitized copies with the inaccessible fields stripped, allowing the DML to proceed for the accessible data. getRemovedFields() reports which fields were removed per object type.
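A usage sketch, assuming contactsToInsert is a list of user-supplied Contact records:

```apex
// Strip any fields the running user cannot create before inserting
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.CREATABLE, contactsToInsert
);
insert decision.getRecords();              // sanitized copies, safe to insert
System.debug(decision.getRemovedFields()); // which fields were stripped, per object
```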
Q81. Why is using the URL.getSalesforceBaseUrl().toExternalForm() method preferable to hardcoding the org's URL for callouts/integrations?
Answer: Hardcoding a URL (e.g., https://mydevorg.lightning.force.com) makes the code environment-specific and brittle. URL.getSalesforceBaseUrl().toExternalForm() dynamically retrieves the base URL of the currently executing org (sandbox, production, or domain-specific URL). This ensures the code is deployable and works consistently across all environments, adhering to the best practice of avoiding hardcoded environment-specific values.
Q82. Explain the functionality of the @InvocableMethod annotation in terms of security context.
Answer: Methods annotated with @InvocableMethod are executed by declaratively configured tools like Flow or Process Builder. By default, @InvocableMethod runs in System Mode, meaning it ignores the running user's FLS and CRUD permissions.
○ Best Practice: To ensure security, developers must include explicit FLS and CRUD checks (e.g., isAccessible()) within the body of the invocable method.
Q83. How do you implement a Custom Sharing Calculation using Apex, and what is the interface required?
Answer: Custom sharing calculations are implemented using the Database.Batchable interface, often called from the Schedulable interface.
○ Mechanism: The batch job executes the complex logic to determine who should have access to what records (beyond standard OWD/Rules) and then uses DML statements to insert, update, or delete records in the Custom Object Sharing Table (__Share objects). This is necessary to enforce custom, dynamic access logic.
Q84. What is the security implication of using dynamic Apex (e.g., Database.query(string)) compared to static SOQL?
Answer: Dynamic Apex is more vulnerable to SOQL Injection if the query string is constructed using unsanitized user input. If an attacker injects malicious WHERE clauses, they can bypass filters and access unauthorized data. Mitigation: Prefer bind variables; where string concatenation is unavoidable, escape user input with String.escapeSingleQuotes(), and include WITH SECURITY_ENFORCED (or WITH USER_MODE) in the query string.
Q85. Why should Apex developers generally avoid using Schema.getGlobalDescribe() in a loop or frequently called method?
Answer: Schema.getGlobalDescribe() returns a map of all available SObjects and their metadata. Accessing or iterating over this map is a very expensive operation in terms of CPU time and heap consumption. Best Practice: If needed, call getGlobalDescribe() once in a static variable's initialization block, or within a dedicated utility class, and reuse the result to avoid repeatedly hitting the limits.
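A caching sketch using a static variable so the expensive describe call runs at most once per transaction:

```apex
public class SchemaCache {
    // Populated lazily, then reused for the rest of the transaction
    private static Map<String, Schema.SObjectType> globalDescribe;

    public static Map<String, Schema.SObjectType> getGlobalDescribe() {
        if (globalDescribe == null) {
            globalDescribe = Schema.getGlobalDescribe(); // expensive call happens once
        }
        return globalDescribe;
    }
}
```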
Q86. Explain the Polymorphism concept in Apex with an example of a list of an Interface type holding different implementation class instances.
Answer: Polymorphism means "many forms." In Apex, it allows a single type reference (like a parent class or an interface) to refer to objects of different types (child classes or implementing classes).
○ Example:
public interface PaymentGateway { void processPayment(); }

public class PayPal implements PaymentGateway {
    public void processPayment() { /* PayPal logic */ }
}

public class Stripe implements PaymentGateway {
    public void processPayment() { /* Stripe logic */ }
}

List<PaymentGateway> processors = new List<PaymentGateway>();
processors.add(new PayPal());
processors.add(new Stripe());
for (PaymentGateway p : processors) {
    p.processPayment(); // Different logic executes based on the actual object type
}
Q87. What is the difference between a Transient keyword and the Static keyword on class variables?
Answer:
○ Transient: Used in Visualforce controllers to mark variables that should not be serialized as part of the View State. Used for temporary, non-essential, or large data that shouldn't be carried between page requests, improving performance by reducing View State size.
○ Static: Declares a variable that belongs to the class itself, not to an instance. It holds the same value across all instances of the class and retains state for the entire transaction (used for recursion control or caching data within the transaction).
Q88. When creating a custom exception class, what is the best practice for its inheritance structure?
Answer: A custom exception class should always extend the built-in Exception class:
public class MyCustomException extends Exception {}
An empty body is usually sufficient: the class inherits the standard constructors (no-argument, message, cause, and message-plus-cause) along with standard methods such as getMessage() and getStackTraceString(), making it easy to throw, log, and handle.
Q89. Describe the Singleton Design Pattern in Apex and provide a use case.
Answer: The Singleton pattern ensures that a class has only one instance and provides a global point of access to it.
○ Implementation: The class has a private constructor (to prevent direct instantiation) and a static method that returns the single instance, creating it only if it doesn't already exist.
○ Use Case: Configuration Management. Using a Singleton to hold application-wide configuration settings (e.g., custom metadata or custom settings values) that are queried once per transaction, preventing redundant SOQL queries and consuming fewer limits.
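A minimal Singleton sketch for the configuration use case; App_Setting__mdt and its Value__c field are assumed custom metadata names, not part of the original answer:

```apex
public class AppConfig {
    private static AppConfig instance;   // the single shared instance
    private Map<String, String> settings;

    private AppConfig() {                // private: prevents direct instantiation
        settings = new Map<String, String>();
        // App_Setting__mdt / Value__c are hypothetical custom metadata names
        for (App_Setting__mdt s : [SELECT DeveloperName, Value__c FROM App_Setting__mdt]) {
            settings.put(s.DeveloperName, s.Value__c);
        }
    }

    public static AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();  // the query runs at most once per transaction
        }
        return instance;
    }

    public String get(String key) { return settings.get(key); }
}
```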
Q90. What is the purpose of the @testVisible annotation, and where should it be applied?
Answer: @testVisible modifies the visibility of a private or protected class member (method, variable) so that it can be accessed and tested by test methods of the same class.
○ Application: It should be applied to private methods or variables that hold critical logic or state necessary for testing, allowing you to achieve higher test coverage without making production code public.
Q91. In the context of Apex, what is Dependency Injection (DI), and why is it crucial for unit testing?
Answer: DI is a design pattern where a class receives its dependencies (the objects it needs to perform its work) from an external source (like a factory or constructor) rather than creating them itself.
○ Crucial for Testing: It allows you to inject mock or stub implementations of external dependencies (like DAO classes or integration services) into the class being tested. This isolates the test to the class's specific logic, ensuring that the test doesn't rely on or execute the real dependency's logic or hit the database.
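A sketch of constructor injection with a test stub (AccountSelector and AccountService are illustrative names; each top-level type would live in its own file):

```apex
// The dependency is expressed as an interface...
public interface AccountSelector {
    List<Account> selectByIds(Set<Id> ids);
}

// ...and injected via the constructor rather than created internally
public class AccountService {
    private AccountSelector selector;
    public AccountService(AccountSelector selector) { this.selector = selector; }
    public Integer countAccounts(Set<Id> ids) {
        return selector.selectByIds(ids).size();
    }
}

// In a test, a stub replaces the real selector so no SOQL runs
@IsTest
private class AccountServiceTest {
    private class StubSelector implements AccountSelector {
        public List<Account> selectByIds(Set<Id> ids) {
            return new List<Account>{ new Account(Name = 'Stub') };
        }
    }
    @IsTest
    static void countsWithoutDatabase() {
        AccountService svc = new AccountService(new StubSelector());
        System.assertEquals(1, svc.countAccounts(new Set<Id>()));
    }
}
```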
Q92. How do you use Custom Settings (Hierarchy or List) or Custom Metadata Types to store runtime configuration, and which is generally preferred in new development?
Answer:
○ Custom Settings (Legacy):
■ Hierarchy: Allows configuration based on user/profile/org. Fast access (no SOQL count).
■ List: Global list of settings. Fast access (no SOQL count).
○ Custom Metadata Types (Preferred):
■ Advantages: They are deployable using metadata API (unlike Custom Settings data), support packaging better, and can be queried using SOQL (FROM MyMetadata__mdt) with highly optimized queries that do not count against the 100 SOQL limit. Preferred for new development.
Q93. What is the Wrapper Class pattern, and when is it necessary to use it?
Answer: A Wrapper Class is a custom Apex class used to combine standard or custom SObjects with non-database data (primitives, Booleans, other objects) into a single object structure.
○ Necessity: It is necessary when you need to maintain additional, temporary, or derived data alongside the SObject, typically for display or processing. Common use cases include:
■ Displaying a checkbox (Boolean isSelected) next to an Account in a list.
■ Combining related data (e.g., an Account, its Primary Contact, and a calculated field) for a Lightning component.
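A minimal wrapper sketch for the checkbox use case (the class and property names are illustrative):

```apex
// Pairs an SObject with UI-only state that has no database column
public class AccountWrapper {
    @AuraEnabled public Account record;
    @AuraEnabled public Boolean isSelected;

    public AccountWrapper(Account record) {
        this.record = record;
        this.isSelected = false; // checkbox state lives only in memory
    }
}
```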
Q94. Explain the functionality of the equals() and hashCode() methods in Apex, and why they are necessary when using custom objects as keys in a Map.
Answer:
○ equals(Object obj): Defines the rule for logical equality. You must override this to determine if two custom objects are considered "equal" based on the values of their fields, rather than just being the same object in memory.
○ hashCode(): Returns an integer that is used to determine where an object should be stored in a hash-based collection (like Map keys or Set values).
○ Necessity: Both must be overridden together. If two objects are logically equal (equals() returns true), their hashCode() must return the same value. Without this, a Map cannot correctly retrieve an object using a logically equivalent, but memory-different, instance as the key.
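A sketch of a two-field composite key that overrides both methods consistently (CompositeKey and its fields are illustrative):

```apex
public class CompositeKey {
    public Id accountId;
    public String region;

    public CompositeKey(Id accountId, String region) {
        this.accountId = accountId;
        this.region = region;
    }

    // Logical equality: two keys match when both fields match
    public Boolean equals(Object obj) {
        if (obj instanceof CompositeKey) {
            CompositeKey other = (CompositeKey) obj;
            return accountId == other.accountId && region == other.region;
        }
        return false;
    }

    // Equal objects must produce equal hash codes
    public Integer hashCode() {
        Integer h = (accountId == null) ? 0 : String.valueOf(accountId).hashCode();
        return (31 * h) + ((region == null) ? 0 : region.hashCode());
    }
}
```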
Q95. What is the Selector (or DAO) Pattern, and how does it relate to the Service Layer?
Answer:
○ Selector/DAO (Data Access Object) Pattern: A dedicated layer of classes responsible for all database interaction (SOQL, SOSL). It centralizes queries, manages complex relationship queries, and enforces FLS/CRUD checks for data retrieval.
○ Relationship to Service Layer: The Service Layer (which holds business logic) receives and processes data, but it never performs SOQL queries directly. Instead, it delegates all data retrieval tasks to the Selector layer, adhering to the Separation of Concerns principle.
Q96. How do you utilize the @AuraEnabled(cacheable=true) annotation, and what are the performance implications?
Answer: This annotation is used on static Apex methods exposed to Lightning Web Components (LWC) and Aura.
○ cacheable=true: Marks the method's result as cacheable on the client side (LWC/Aura) using the Lightning Data Service (LDS) wire service.
○ Implications: It significantly improves performance by:
1. Reducing Server Trips: The framework uses cached data instead of making repeated server calls.
2. Concurrency: Marks the method as a read-only request, allowing the platform to run it faster and more efficiently.
Q97. What is the purpose of the System.debug(LoggingLevel.FINER, '...') call, and how does it help with performance profiling?
Answer: Logging at various levels allows developers to control the volume of debug messages written to the debug log.
○ LoggingLevel.FINER: This is a very verbose logging level, often used internally by the system.
○ Profiling: By strategically setting debug statements at FINER or FINEST around specific code blocks, you can selectively filter the debug log to focus only on those performance-critical sections during analysis, helping to pinpoint bottlenecks in the CPU time or SOQL execution.
Q98. What is the primary use case for the Database.upsert() DML operation?
Answer: Database.upsert() performs a combined insert-or-update operation on a list of SObjects in a single DML statement.
○ Mechanism: By default it matches on the record Id; optionally, an external ID field can be passed as a second argument. If a record with the given key already exists in the database, upsert performs an update; if it doesn't exist, it performs an insert.
○ Use Case: Data migrations or integrations where you are receiving external data and need to either create new records or update matching existing records based on a unique external identifier.
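An upsert sketch, assuming External_Key__c is an external ID field defined on Account:

```apex
// External_Key__c is an assumed external ID field on Account
List<Account> incoming = new List<Account>{
    new Account(Name = 'Acme',   External_Key__c = 'ERP-001'),
    new Account(Name = 'Globex', External_Key__c = 'ERP-002')
};
// Matching rows are updated; unmatched rows are inserted (allOrNone = false)
Database.UpsertResult[] results =
    Database.upsert(incoming, Account.External_Key__c, false);
for (Database.UpsertResult r : results) {
    System.debug(r.isCreated() ? 'Inserted' : 'Updated');
}
```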
Q99. How would you implement a simple Factory Pattern in Apex for creating different types of SObjects based on an input parameter?
Answer: The Factory Pattern centralizes object creation logic:
public class SObjectFactory {
    public class FactoryException extends Exception {}

    public static SObject createSObject(String typeName) {
        if (typeName == 'Account') {
            return new Account(Name = 'New Account');
        } else if (typeName == 'Contact') {
            return new Contact(LastName = 'New Contact');
        } else {
            throw new FactoryException('Unknown SObject type: ' + typeName);
        }
    }
}
Q100. Why is it essential to use Map<Id, SObject> for lookups in the Service Layer, even when only processing a single record?
Answer: Using a Map ensures the code path for a single record is identical to the code path for 200 records. While a simple List lookup is fine for one record, it fails the bulk test. By defaulting to the Map structure, you write code that is inherently bulkified and future-proof from the start, following the "Write once, run many" Apex principle.
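A sketch of the Map-based lookup (accountIds, contacts, and the assigned field are assumed from earlier handler logic):

```apex
// Same code path whether the handler receives 1 record or 200
Map<Id, Account> parentAccounts = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);
for (Contact c : contacts) {
    Account parent = parentAccounts.get(c.AccountId); // O(1) lookup, no nested loop
    if (parent != null) {
        c.Description = parent.Industry; // illustrative field assignment
    }
}
```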
