Thursday, June 1, 2017

Trigger + Process + Workflow = Recursion Hell?

As your Salesforce org matures, chances are you'll find yourself trying to untangle Apex triggers, processes created with Process Builder, and workflow rules. Especially when field updates are involved, predicting the outcome of the order of operations can be a real pain, not least because the documentation still leaves room for questions in cases involving recursion.

Follow the two rules below for a reduced-pain implementation.

If you're updating fields on a record in Process Builder and you've marked the "Recursion" checkbox, know that every time the record is updated by the process, before and after trigger events will fire again. This is also true for updates made by a process invoked as an action by another process.


So all in all, remember that a single Apex trigger could run 14 (fourteen) times for a single DML operation! If you're mixing triggers with processes and workflow rules, make very sure your business logic in triggers will survive recursion.
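
A common defense is a static-variable guard that records which IDs the transaction has already processed. Below is a minimal sketch of that pattern, written in Python only for illustration since the guard idea is language-agnostic; the class and method names are made up, not Salesforce APIs.

```python
# Sketch of the static-flag recursion guard commonly used in trigger handlers.
# Python stands in for Apex here; all names are illustrative.

class AccountTriggerHandler:
    # Class-level ("static") set of record IDs already processed
    # in the current transaction.
    _processed_ids = set()

    @classmethod
    def handle_update(cls, records):
        # Drop records this transaction has already handled, so
        # re-entrant trigger invocations become no-ops.
        fresh = [r for r in records if r["id"] not in cls._processed_ids]
        if not fresh:
            return 0
        cls._processed_ids.update(r["id"] for r in fresh)
        # ... real business logic would run here, possibly causing
        # another update (and hence another trigger invocation) ...
        return len(fresh)

records = [{"id": "001A"}, {"id": "001B"}]
first = AccountTriggerHandler.handle_update(records)   # processes both
second = AccountTriggerHandler.handle_update(records)  # re-entry: no-op
```

In Apex the same idea uses a static Set&lt;Id&gt; on a handler class, since static state lives for the duration of the transaction.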

Monday, May 1, 2017

getValue() getter method vs. { get; } shorthand

Salesforce's { get; set; } syntax has been around for a long time and is a time-tested, reliable way to define properties in Apex. But after testing its usability and limitations in Spring '17, I've decided that explicitly declared getter and setter methods should be preferred over the convenient { get; set; } syntax.

The primary reason is that the only way to expose a property in a custom Apex class for use with Lightning Components is to use the @AuraEnabled annotation, and this annotation only works on a traditional getter method such as String getName().

The secondary reason is that the developer also has the option to either call the getter or access the private field directly from other methods in the class, which is not possible when using { get; set; }.

Wednesday, December 14, 2016

caffe.io.load_image Quick Facts

Quick facts on the numpy.ndarray object returned by caffe.io.load_image.

  • The array's shape is (height, width, 3)
  • The last shape value of 3 represents three color channels, in RGB order. This is important because OpenCV's imread function gives channels in BGR order.
  • The array has dtype=float32 with values in range 0.0-1.0. Again, this is important because OpenCV's imread function gives an array with dtype=uint8 with values in range 0-255.
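
Converting between the two conventions is just a channel flip plus a dtype/scale change. A minimal numpy sketch, using a fabricated one-pixel array in place of a real caffe.io.load_image result:

```python
import numpy as np

# Stand-in for what caffe.io.load_image returns: shape (height, width, 3),
# RGB channel order, dtype float32, values in [0.0, 1.0].
rgb_float = np.array([[[1.0, 0.5, 0.0]]], dtype=np.float32)  # one pixel

# Convert to the OpenCV imread convention: BGR order, uint8 in [0, 255].
bgr_uint8 = (rgb_float[:, :, ::-1] * 255).astype(np.uint8)

# And back again.
rgb_roundtrip = bgr_uint8[:, :, ::-1].astype(np.float32) / 255.0
```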

I'm publishing this so I don't have to re-learn this "truth" every time I'm dealing with image loading and conversions.

Tuesday, December 6, 2016

What Code Belongs in an MVC Controller

The purpose of a controller is to act as a conduit between each user interaction and system response. Typically this involves three steps:

  • readRequest(). For a web server, this means reading the inbound HTTP request, analyzing the headers, and taking care of authorization.
  • doSomething(). Now that the server knows what it's being asked to do, the server can go ahead and do something useful.
  • writeResponse(). After the server has finished its job or kicked off a long-running process, it should write a response back to the user to let the user know how things went.

In a different sense, a controller's action method is just a wrapper for a function that executes actual business logic, a wrapper that translates an HTTP request into function args.
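
That wrapper idea can be sketched in a few lines of framework-free Python; all of the names below are hypothetical:

```python
# A minimal sketch of a controller as a thin wrapper around business logic.

def get_user(user_id):
    # The actual business logic, free of any HTTP concerns.
    return {"id": user_id, "name": "Alice"}

def handle_get_user(request):
    # 1. readRequest(): pull what we need out of the HTTP-ish request.
    user_id = request["params"]["id"]
    # 2. doSomething(): delegate to the business-logic function.
    user = get_user(user_id)
    # 3. writeResponse(): translate the result back into a response.
    return {"status": 200, "body": user}

response = handle_get_user({"params": {"id": "42"}})
```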

This setup makes sense to me, but what other approaches are there to writing good controllers? Please share your comments.

Wednesday, November 9, 2016

Install Anaconda 2 to /opt/anaconda2

By default, Anaconda 4.2 for Python 2 will install itself to the user's home directory on Linux. This is great for local development, but for server-side deployment and testing it's better to install to a shared location.

The install docs are pretty vague about how to set this up, saying simply, "Install Anaconda as a user unless root privileges are required." The way I've made this work easily on an Amazon EC2 instance running Ubuntu 16.04 LTS is as follows.

  1. Download the appropriate installer
  2. Install as a superuser with sudo bash Anaconda2-4.2.0-Linux-x86_64.sh
  3. Install to /opt/anaconda2 and prepend the install location to PATH in ~/.bashrc
  4. Change the target directory's group ownership to ubuntu and grant g+w permission for the directory and all its subdirectories

In short, something like this will work beautifully, allowing packages to still be installed simply using conda install or pip install.

Wednesday, July 27, 2016

caffe.io.load_image vs. cv2.imdecode

Interesting note to self... the following code produces the same results, one using a chain of OpenCV methods and the other using a concise Caffe method.
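
A rough mock of the OpenCV side of that equivalence, assuming the chain was decode, reorder BGR to RGB, then scale to float; the "decoded" array below is fabricated so neither cv2 nor caffe is required to run it:

```python
import numpy as np

# Pretend this uint8 BGR array is what cv2.imdecode / cv2.imread returned.
decoded_bgr = np.array([[[255, 128, 0]]], dtype=np.uint8)  # one pixel

# The OpenCV-style chain: reorder BGR -> RGB, then scale uint8 -> float32 [0, 1].
opencv_style = decoded_bgr[:, :, ::-1].astype(np.float32) / 255.0

# caffe.io.load_image hands back the same thing directly:
# RGB order, float32, values in [0.0, 1.0].
```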

Thursday, May 12, 2016

Mix Groovy and Java in STS 3.7.3.RELEASE

To mix Java and Groovy together in the same Spring Starter project, a few changes can be made to the project's properties and paths. By default, when a Java project is created it only looks for source files and test files in the src/main/java and src/test/java directories.

Add src/main/groovy to Java Build Path


  1. Create the src/main/groovy directory
  2. Right-click the project in Package Explorer, then click Properties
  3. Click Java Build Path in the left sidebar
  4. Click Add Folder... to add src/main/groovy


Add Groovy libraries to classpath


  1. Right-click the project in Package Explorer
  2. Expand Groovy, then click Add Groovy libraries to classpath


Only include groovy-all


At this point, trying to run the Spring Boot app will generate errors that look like this.

...
org.apache.catalina.core.ContainerBase : A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[]]
...


The errors can be resolved by removing the extra libraries STS automatically added in the previous step.
  1. Right-click Groovy Libraries in the project
  2. Click Properties
  3. Select "No, only include groovy-all" in the first panel that asks, "Should all jars in the groovy-eclipse lib folder be included on the classpath?"

Monday, May 9, 2016

Mix Groovy and Java in IntelliJ IDEA

To mix Java and Groovy together in the same IntelliJ IDEA project, a simple change can be made to the project's .iml file. By default, when the project is created it only looks for source files and test files in the src/main/java directory.
By adding two sourceFolder elements, IntelliJ will automatically find and compile .groovy files in the Groovy source directories.
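
For reference, the two elements look something like this inside the .iml file's content root; the paths assume the conventional src/main/groovy and src/test/groovy layout:

```xml
<!-- Inside <component name="NewModuleRootManager"> ...
     <content url="file://$MODULE_DIR$"> -->
<sourceFolder url="file://$MODULE_DIR$/src/main/groovy" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/src/test/groovy" isTestSource="true" />
```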

Wednesday, May 4, 2016

Go to Assembla Ticket

Here's a simple bit of JavaScript that can be converted into a bookmarklet to quickly open an Assembla ticket.

Saturday, April 30, 2016

HTTP/REST API Specifications

Now that I have the pleasure of designing new APIs to support both B2C and B2B use cases, my first thought is to standardize. In the case of APIs, I believe standards reduce the burden of maintenance and improve the ease of integration.

To that end, I sought to define guidelines for all operations. These are not new or novel, but I need these to set shared expectations with my team. And we start with a few core principles:

  1. Follow REST conventions for CRUD operations
  2. Use JSON in all request and response bodies (Content-Type: application/json)...
  3. Except where binary content is involved (Content-Type: multipart/form-data)

REST conventions


Striving to KISS:

Error response body


Success responses will contain data appropriate to the request, but all error response bodies look alike, at least structurally, containing only one field.

  • Array<Error> errors - An array of any errors encountered while executing the operation. This field is always present for an error state (any non-200 HTTP status).


Each errors element has the following fields:

  • int code - "for programmatic consumption" (ref Braintree)
  • String message - "for human consumption" (ref Braintree)
  • String component - Whatever we're blaming for the error
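
Putting that together, a 400-level response body under this scheme might look like the following sketch; the code, message, and component values are invented for illustration:

```python
import json

# An illustrative error response body following the structure above.
error_body = {
    "errors": [
        {
            "code": 81502,                     # for programmatic consumption
            "message": "Amount is required.",  # for human consumption
            "component": "payment-service",    # whatever we're blaming
        }
    ]
}

payload = json.dumps(error_body)
```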

Tuesday, February 2, 2016

Get Full DateTime Format (GMT) in Apex

To get the full DateTime format in GMT such that it's compatible with Apex methods like JSON.deserialize(),  the most accurate method is to call DateTime.formatGmt().


For comparison, below are some other alternatives for generating similar String values for a DateTime object.
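
For readers outside the Apex world, the same GMT timestamp can be produced in Python, assuming the usual yyyy-MM-dd'T'HH:mm:ss.SSS'Z' pattern; note the trim from %f's six digits down to three-digit milliseconds:

```python
from datetime import datetime, timezone

# Python analogue of Apex's
# DateTime.formatGmt('yyyy-MM-dd\'T\'HH:mm:ss.SSS\'Z\'').
# %f yields microseconds (6 digits), so trim to milliseconds (3 digits).
dt = datetime(2016, 2, 2, 13, 30, 45, 123000, tzinfo=timezone.utc)
stamp = dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
```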

Monday, January 11, 2016

6 Reasons You Got the Generic Error Page

Sometimes in Visualforce, you'll have debug logging enabled for a community or portal user, but loading a page fails with no trace of what went wrong. When you look in the debug log for an explanation, all you see is that Salesforce successfully loaded the Generic Error Page configured for the site.


Before logging a case or asking a fellow developer for help, check for the following 6 common culprits.

  • Are you missing any static resources, {!$Resource.__}?
  • Are you missing any custom labels, {!$Label.__}?
  • Are you missing any custom permissions, {!$Permission.__}?
  • Are you missing any page references, {!$Page.__}?
  • Are you missing any custom controller/extension properties bound on the page? This can be a problem when deploying packages among sandbox orgs.
  • Are you missing any custom controller/extension actions bound on the page?

What I've also discovered, interestingly enough, is that when using MavensMate v6.0.0 with Sublime Text 3, saving a Visualforce page or component for some reason skips the validation of global variables used in Visualforce.

Wednesday, November 18, 2015

Why Consultants Should Learn to Type

$100,000,000,000.

How's that for an answer to the title question? If you're wondering where that came from, the answer's simple. Let's conservatively estimate the size of the consulting industry to be $200 billion[1][2]. Without adding more people, what would happen if the productivity of people already in the industry increases by 50%? That 50% would amount to $100 billion, and that's the reason I believe all consultants should learn how to type.

Okay, lest my alma mater revoke my MBA, I'll admit right now that my premise is a gross exaggeration based on unrealistic assumptions. So I'm off... but by how much?

50% is bogus. Or is it?


"The average person types between 38 and 40 words per minute... However, professional typists type ... upwards of 65 to 75 WPM."[3] The difference between the professional typist and the average person is simply that the professional typist learned to type. And my adjusted typing speed is 87 WPM, even with a wrist injury.
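
A quick back-of-the-envelope check on those figures suggests the 50% assumption is, if anything, conservative:

```python
# Back-of-the-envelope check on the quoted typing-speed figures.
average_wpm = (38 + 40) / 2        # "average person"
professional_wpm = (65 + 75) / 2   # "professional typist"

# Fractional improvement from learning to type at a professional pace.
speedup = professional_wpm / average_wpm - 1
```

Roughly an 80% difference between the midpoints, so a 50% bump in typing-bound output isn't outlandish.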



#yawn So what? If you're not typing at least at the pace of a professional typist, consider yourself handicapped. Not in a derogatory way, simply in the sense that you have much untapped potential.

Try an analogy


Take human speech as an example. Here are two clips below from two consultants pitching their firms to win a project. Which one would you hire?

Consultant A


Consultant B


You've probably figured out that it's the same voice recording, with the modified clip slowed to 67% of the original. But isn't the difference obvious? If you wouldn't hire the second guy because he spoke so slowly, why should you settle for the guy who types that way?

Keep in mind that typing is not rocket science. All it takes is scheduled time and practice.

Imagine if


... John the business analyst suddenly produces 50% more:
  • Detailed notes from meetings
  • Thorough functional requirements
  • Accurate and comprehensive documentation

... Jane the app developer suddenly produces 50% more:
  • Code
  • Code comments
  • Regression test automation

... Jill the project manager suddenly produces 50% more:
  • Personalized stakeholder communications
  • Risk reports and mitigation strategies
  • Next phase project proposals

... everyone shared and collaborated 50% more on enterprise platforms! (Chatter, anyone?)

The value gained from a 50% increase in typing speed is not proportional. I believe it's exponential. People who have speech impediments tend to speak less. You can guess that people who have typing impediments will avoid media where typing is the means of communication.

And if your consultants, if your organization, don't care to address people's typing skills, aren't you leaving money on the table from your investments in typing-based platforms like SharePoint? Salesforce? Confluence?

Closing thoughts


I believe we owe it to ourselves, to our companies, to our economies, to learn this simple skill that catalyzes collaboration and value creation in the digital age. And if you're a company, teaching your people to type is a one-time investment that will pay lifetime dividends.

Saturday, October 31, 2015

Field Update + Update Records + Apex Trigger = ?

While the "Triggers and Order of Execution" page in the Winter '16 Force.com Apex Code Developer's Guide gives good information about the high-level order of operations, developers reading it may still be unclear about how complex interactions unfold across objects that use workflow rules, Apex triggers, and Process Builder ("PB") processes.

Let's take a relatively simple set of automation applied to a single object:

  • A workflow rule with a field update
  • A recursion-enabled process that updates the record which starts the process
  • An Apex trigger

When a transaction is processed, how many times does each of the above automation components execute in that single transaction? The answers below may surprise you.

Component Type    Num Executions
Workflow Rule     1
Process           6
Trigger           8

The exact step order in which the components executed interleaves all three: across the 15 steps of the single transaction, the trigger fired 8 times, the process ran 6 times, and the workflow rule fired once.

The takeaway should be that developers ought to be very careful when adding automation to an environment that uses "all of the above", meaning workflow rules, processes, and triggers.

Friday, September 4, 2015

Where's My Flow/Process in the Change Set?

If you've tried deploying flows or processes in a change set, especially a large one, you may have found it disorienting to locate them on the Change Set detail page. For quick reference, the table below shows what to look for when examining an open change set vs. a closed change set.

Component Type    Ordered By (Open Change Set)    Ordered By (Closed Change Set)
Flows             Unique Name                     Flow Name
Processes         API Name                        Process Name

Monday, August 31, 2015

What's the Real Risk with Enabling Divisions?

In Salesforce, divisions are a means to improve query and search performance by partitioning data into logical buckets called "divisions". However, as with the Person Accounts feature, many admins may hesitate to enable divisions because of this warning: "Enabling divisions is irreversible. After implementing divisions, you cannot revert back to a state that does not have division fields."

"Irreversible", huh? Well... what's the real risk with enabling divisions? I think there is no significant risk.

According to Salesforce Help (Summer '15), enabling divisions may affect (or not) nine key areas. But it seems like the effects can be easily negated or suppressed.

Area                                 Reversal Strategy
Search                               Revoke the "Affected by Divisions" permission.
List views                           Revoke the "Affected by Divisions" permission.
Chatter                              Not supported (i.e., affected).
Reports                              Revoke the "Affected by Divisions" permission.
Viewing records and related lists    Not affected.
Creating new records                 Set to the global division.
Editing records                      Set to the global division.
Custom objects                       Set to the global division.
Relationships                        Set to the global division.

In short, if you want to give divisions a try, talk to a few people about enabling the feature. If the foremost argument against enabling divisions is simply that it's irreversible, go ahead and just enable it anyway (in a full sandbox first). If it doesn't work for you, you can always revoke the Affected by Divisions permission from all users.

Tuesday, August 25, 2015

Salesforce API Versions

In Salesforce, did you know that API version 34.0 corresponds to Summer '15? Or that API version 17.0 means Winter '10? If not, this simple table of API versions and release names may be useful to you.

API Version    Release
34.0           Summer '15
33.0           Spring '15
32.0           Winter '15
31.0           Summer '14
30.0           Spring '14
29.0           Winter '14
28.0           Summer '13
27.0           Spring '13
26.0           Winter '13
25.0           Summer '12
24.0           Spring '12
23.0           Winter '12
22.0           Summer '11
21.0           Spring '11
20.0           Winter '11
19.0           Summer '10
18.0           Spring '10
17.0           Winter '10
16.0           Summer '09
15.0           Spring '09
14.0           Winter '09
13.0           Summer '08
12.0           Spring '08
11.0           Winter '08
10.0           Summer '07

Friday, August 14, 2015

6 Salesforce Target Attributes Every PowerCenter Session Should Set

When using Informatica PowerCenter to perform ETL jobs that process large data volumes (think millions of records), there are at least six attributes that should be set for each session. And even when loading smaller data volumes, these attributes may still be worth setting to improve performance.

1. Max batch size = 10000


The assumption here is that the session will use the Bulk API, which is the fastest way to load data in Salesforce. Period. As of Summer '15, the maximum batch size for a Bulk API job is still 10,000 records. Let's take advantage of this.

2. Set fields to NULL = checked


With data loads, by default it's best to assume that a blank field means that there is no data from the source system to feed this field. In this case, whatever is currently in the field would be considered invalid, to be overwritten with a blank value during the feed.

3. Use SFDC Bulk API = checked


Self-explanatory.

4. Monitor Bulk Job Until All Batches Processed = checked


When chaining tasks inside a worklet or workflow, monitoring the bulk job until all batches are processed helps to ensure that a dependent task will start only after the predecessor task truly completes. Otherwise, not only would you increase the risk of encountering locking errors, you run the risk of the next task running in the context of stale data.

5. Enable field truncation attribute = unchecked


This is equivalent to the Allow field truncation setting in Salesforce Data Loader. Unfortunately, as of Summer '15, using the Bulk API still prevents us from using this automatic truncation option. So be aware that truncating values must be done by other means during the transformation, not the load!

6. Enable hard deletes for BULK API = checked


Why not? This significantly improves the performance of mass delete operations, by skipping the Recycle Bin and erasing the record immediately.

Tuesday, June 30, 2015

Salesforce Change Set Accelerators

Okay, so this post isn't really about web accelerators in the purest sense of the definition. But if you're frustrated with the experience of navigating and managing change sets in the UI, here are a few quick bookmarklets you can add to your browser to ease the pain.

To "install" a bookmarklet, simply drag the bookmarklet on to your browser's bookmarks bar. Or, in Internet Explorer, right-click the bookmarklet and click Add to favorites...

Last Tested Date: June 30, 2015 (Summer '15)

Change Set: Next


CS > Next

Ever notice that sometimes when you click Next to scroll through the pages of 25 components in a change set, the size of the table shifts and displaces the Next link? This simple bookmarklet clicks the Next link for you, without you having to move your mouse cursor.

Change Set: Previous


CS > Previous

Same as the Change Set: Next bookmarklet, but for the Previous link.

Add to Change Set: more (10,000 records per list page)


AtCS > more (10,000)

When adding components to a change set, especially for something like Custom Fields, you may have noticed two frustrating problems. First, clicking through multiple pages to find the record you want is a pain. Second, multiplying the pain is the fact that what you select on one page is lost when you switch to a different page. This bookmarklet sort of solves the problem by upping the size of the list to 10,000 records, which usually is enough to allow you to select and add all components of the same type at once.

Other tips


If you're planning to create a large change set as you build out your solution over multiple days or weeks, bookmarking the change set's detail page in your sandbox org should be a quick win.

Sunday, May 31, 2015

Why Roll-Up Summary Requires Master-Detail

There's a 7-year-old idea on the Success Community titled "Eliminate Need for Master-Detail Relationship for Roll-ups", and users have voted it up to over 25,000 points.


I won't lie. I am most likely one of the 2,500 users who voted in favor of the idea, who thought I can't believe a no-brainer like this is still an idea and not a GA feature!

But today, while watching Who Sees What: Record Access Via Sharing Rules I suddenly realized that the reason for the delay could actually be ridiculously simple: Implementing this feature would violate the security design of the Salesforce1 Platform. How? By exposing information that would otherwise be hidden when OWD is set to "Private".

Scenario: VIP bank accounts


Let's say you're a system administrator for a fictitious financial institution called SaaS Bank. At SaaS Bank, there are everyday customers, and then there are VIP customers. VIP customers at SaaS Bank are high-profile individuals of great importance or great wealth, and a few notable VIPs include Barack Obama, Warren Buffett and Marc Benioff.

Understandably, VIPs get the white glove treatment. Their relationships are discreetly managed by a handful of bankers in the Private Bank department within SaaS Bank. These bankers are known as Private Bankers, and their number one priority is protecting their clients' sensitive data, namely the clients' bank accounts and balances.

Data on customers' bank accounts are stored in a custom object labeled Bank Account, and all bank accounts serviced by SaaS Bank are tracked in this object.

The security requirement: Everyone at SaaS Bank should be able to see that a VIP is indeed a customer, but only Private Bankers (and their trusted colleagues) should be able to see the bank accounts held by a VIP.

The simple solution would be to make the Bank Account custom object private using OWD. And to relate Bank Account records back to a customer (i.e., a standard Account record), the object has a Lookup(Account) field, not a Master-Detail(Account) field.

The tricky requirement: All users want to see aggregate balance data for their customers.

So, if you could create a Roll-Up Summary field on the Account object that sums all balances for a customer, you would violate the private sharing model for the Bank Account object. A Roll-Up Summary field holds a value that is calculated based on all pertinent records.

In a private sharing model, how would... how could the Roll-Up Summary field hold a value that simultaneously shows 0 to a regular banker, the total balance to a Private Banker and something in between to other bankers with whom a Private Banker has manually shared records? The answer is it can't.

Objects with Master-Detail fields inherit the record access controls on parent objects. In this case you can present aggregate information on parent records via Roll-Up Summary fields, because if you have access to the parent record you also have access to all child records. But when you use a Lookup field instead because you need different record access for child records, well...

Okay, I get it, and I still need a workaround


I think there are legitimate reasons why an organization would want aggregate data to be automatically calculated and displayed through a Lookup relationship. There are two ways to work around this:
  • Ignore the built-in security constraint and leverage Apex running in system context to perform the roll-up. You can even use an off-the-shelf solution like the free Declarative Lookup Rollup Summaries app or the paid Rollup Helper app.
  • Use a custom visual element (e.g., Visualforce page) to display a contextual roll-up, taking into account the current user's access to child records. This would leverage the with sharing keyword to accurately display different values to different users.

In the end, it would be nice if either of the above options were made into native features. And I'm guessing that the first option, the convenient and intentional deviation from an established security model, is what the 2,500 supporters of that 7-year-old idea want.