Wednesday, November 18, 2015

Why Consultants Should Learn to Type

$100,000,000,000.

How's that for an answer to the title question? If you're wondering where that number came from, the answer's simple. Let's conservatively estimate the size of the consulting industry to be $200 billion[1][2]. Without adding more people, what would happen if the productivity of everyone already in the industry increased by 50%? That 50% would amount to $100 billion, and that's why I believe all consultants should learn how to type.

Okay, lest my alma mater revoke my MBA, I'll admit right now that my posit is a gross exaggeration based on unrealistic assumptions. So I'm off... but by how much?

50% is bogus. Or is it?


"The average person types between 38 and 40 words per minute... However, professional typists type ... upwards of 65 to 75 WPM."[3] The difference between the professional typist and the average person is simply that the professional typist learned to type. And my adjusted typing speed is 87 WPM, even with a wrist injury.



#yawn So what? So if you're not typing at least at the pace of a professional typist, consider yourself handicapped. Not in a derogatory way; I simply mean you have a lot of untapped potential.

Try an analogy


Take human speech as an example. Below are two clips from two consultants pitching their firms to win a project. Which one would you hire?

Consultant A


Consultant B


You've probably figured out that it's the same voice recording, with the modified clip slowed to 67% of the original. But isn't the difference obvious? If you wouldn't hire the second guy because he spoke so slowly, why should you settle for the guy who types that way?

Keep in mind that typing is not rocket science. All it takes is scheduled time and practice.

Imagine if


... John the business analyst suddenly produces 50% more:
  • Detailed notes from meetings
  • Thorough functional requirements
  • Accurate and comprehensive documentation

... Jane the app developer suddenly produces 50% more:
  • Code
  • Code comments
  • Regression test automation

... Jill the project manager suddenly produces 50% more:
  • Personalized stakeholder communications
  • Risk reports and mitigation strategies
  • Next phase project proposals

... everyone suddenly shares and collaborates 50% more on enterprise platforms! (Chatter, anyone?)

The value gained from a 50% increase in typing speed is not merely proportional; I believe it's exponential. People who have speech impediments tend to speak less, so you can guess that people who have typing impediments will avoid media where typing is the means of communication.

And if your organization doesn't care to address its consultants' typing skills, aren't you leaving money on the table from your investments in typing-based platforms like SharePoint? Salesforce? Confluence?

Closing thoughts


I believe we owe it to ourselves, to our companies, to our economies, to learn this simple skill that catalyzes collaboration and value creation in the digital age. And if you're a company, teaching your people to type is a one-time investment that will pay lifetime dividends.

Saturday, October 31, 2015

Field Update + Update Records + Apex Trigger = ?

While the "Triggers and Order of Execution" page in the Winter '16 Force.com Apex Code Developer's Guide gives good information about the high-level order of operations, it still leaves developers unclear about how complex interactions unfold across objects that combine workflow rules, Apex triggers, and Process Builder ("PB") processes.

Let's take a relatively simple set of automation applied to a single object:

  • A workflow rule with a field update
  • A recursion-enabled process that updates the record which starts the process
  • An Apex trigger

When a transaction is processed, how many times does each of the above automation components execute in that single transaction? The answers below may surprise you.

Component Type   Executions
Workflow Rule    1
Process          6
Trigger          8

The exact step order in which the components were executed is illustrated below.

[Step-by-step execution table: 15 numbered steps, each attributed to the Workflow Rule, Process, or Trigger column]

The takeaway is that developers ought to be very careful when adding automation to an environment that uses all of the above: workflow rules, processes, and triggers.

Friday, September 4, 2015

Where's My Flow/Process in the Change Set?

If you've tried deploying flows or processes in a change set, especially a large one, you may have been disoriented trying to find them on the Change Set detail page. For quick reference, the table below shows what to look for when examining an open change set vs. a closed change set.

Component Type   Ordered By (Open Change Set)   Ordered By (Closed Change Set)
Flows            Unique Name                    Flow Name
Processes        API Name                       Process Name

Monday, August 31, 2015

What's the Real Risk with Enabling Divisions?

In Salesforce, divisions are a means to improve query and search performance by partitioning data into logical buckets called "divisions". However, like with the Person Accounts feature, many admins may hesitate and think twice about enabling divisions due to this warning: "Enabling divisions is irreversible. After implementing divisions, you cannot revert back to a state that does not have division fields."

"Irreversible", huh? Well... what's the real risk with enabling divisions? I think there is no significant risk.

According to Salesforce Help (Summer '15), enabling divisions may affect (or not) nine key areas. But it seems like the effects can be easily negated or suppressed.

Area                               Reversal Strategy
Search                             Revoke the "Affected by Divisions" permission.
List views                         Revoke the "Affected by Divisions" permission.
Chatter                            Not supported (i.e., affected).
Reports                            Revoke the "Affected by Divisions" permission.
Viewing records and related lists  Not affected.
Creating new records               Set to the global division.
Editing records                    Set to the global division.
Custom objects                     Set to the global division.
Relationships                      Set to the global division.

In short, if you want to give divisions a try, talk to a few people about enabling the feature. If the foremost argument against enabling divisions is simply that it's irreversible, go ahead and just enable it anyway (in a full sandbox first). If it doesn't work for you, you can always revoke the Affected by Divisions permission from all users.

Tuesday, August 25, 2015

Salesforce API Versions

In Salesforce, did you know that API version 34.0 corresponds to Summer '15? Or that API version 17.0 means Winter '10? If not, this simple table of API versions and release names may be useful to you.

API Version   Release
34.0          Summer '15
33.0          Spring '15
32.0          Winter '15
31.0          Summer '14
30.0          Spring '14
29.0          Winter '14
28.0          Summer '13
27.0          Spring '13
26.0          Winter '13
25.0          Summer '12
24.0          Spring '12
23.0          Winter '12
22.0          Summer '11
21.0          Spring '11
20.0          Winter '11
19.0          Summer '10
18.0          Spring '10
17.0          Winter '10
16.0          Summer '09
15.0          Spring '09
14.0          Winter '09
13.0          Summer '08
12.0          Spring '08
11.0          Winter '08
10.0          Summer '07

Friday, August 14, 2015

6 Salesforce Target Attributes Every PowerCenter Session Should Set

When using Informatica PowerCenter to perform ETL jobs that process large data volumes (think millions of records), there are at least six attributes that should be set for each session. And even when loading smaller data volumes, these attributes may still be worth setting to improve performance.

1. Max batch size = 10000


The assumption here is that the session will use the Bulk API, which is the fastest way to load data in Salesforce. Period. As of Summer '15, the maximum batch size for a Bulk API job is still 10,000 records. Let's take advantage of this.

2. Set fields to NULL = checked


With data loads, it's best to assume by default that a blank field means the source system has no data for that field. In that case, whatever is currently in the target field is considered invalid and should be overwritten with a blank value during the load.

3. Use SFDC Bulk API = checked


Self-explanatory.

4. Monitor Bulk Job Until All Batches Processed = checked


When chaining tasks inside a worklet or workflow, monitoring the bulk job until all batches are processed helps ensure that a dependent task starts only after its predecessor truly completes. Otherwise, not only do you increase the risk of encountering locking errors, but you also risk the next task running against stale data.

5. Enable field truncation attribute = unchecked


This is equivalent to the Allow field truncation setting in Salesforce Data Loader. Unfortunately, as of Summer '15, using the Bulk API still prevents us from using this automatic truncation option. So be aware that truncating values must be done by other means during the transformation, not the load!

6. Enable hard deletes for BULK API = checked


Why not? This significantly improves the performance of mass delete operations by skipping the Recycle Bin and erasing records immediately.

Tuesday, June 30, 2015

Salesforce Change Set Accelerators

Okay, so this post isn't really about web accelerators in the purest sense of the definition. But if you're frustrated with the experience of navigating and managing change sets in the UI, here are a few quick bookmarklets you can add to your browser to ease the pain.

To "install" a bookmarklet, simply drag it onto your browser's bookmarks bar. Or, in Internet Explorer, right-click the bookmarklet and click Add to favorites...

Last Tested Date: June 30, 2015 (Summer '15)

Change Set: Next


CS > Next

Ever notice that sometimes when you click Next to scroll through the pages of 25 components in a change set, the size of the table shifts and displaces the Next link? This simple bookmarklet clicks the Next link for you, without you having to move your mouse cursor.
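As a sketch of what such a bookmarklet can do (the "Next" link text and the anchor-based page structure are assumptions about the change set UI, so adjust the matching text if your org's page differs), the core logic is just a text-matching click:

```javascript
// Core logic of a "click the Next link" bookmarklet sketch.
// Scans a list of anchor elements and clicks the first one whose
// visible text matches exactly.
function clickLinkByText(links, text) {
  for (var i = 0; i < links.length; i++) {
    var label = (links[i].textContent || '').trim();
    if (label === text) {
      if (typeof links[i].click === 'function') links[i].click();
      return links[i];
    }
  }
  return null; // no matching link on this page
}

// Bookmarklet form (paste as the bookmark's URL):
// javascript:(function(){var a=document.querySelectorAll('a'),i;for(i=0;i<a.length;i++){if(a[i].textContent.trim()==='Next'){a[i].click();break;}}})();
```

Because the match is on link text rather than position, the bookmarklet keeps working even when the table resizes and the Next link moves.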

Change Set: Previous


CS > Previous

Same as the Change Set: Next bookmarklet, but for the Previous link.

Add to Change Set: more (10,000 records per list page)


AtCS > more (10,000)

When adding components to a change set, especially for something like Custom Fields, you may have noticed two frustrating problems. First, clicking through multiple pages to find the record you want is a pain. Second, multiplying the pain is the fact that what you select on one page is lost when you switch to a different page. This bookmarklet sort of solves the problem by upping the size of the list to 10,000 records, which usually is enough to allow you to select and add all components of the same type at once.
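Here's a sketch of the idea behind this bookmarklet, assuming the list page accepts a rowsperpage URL parameter (an assumption about the page's URL scheme; inspect your org's "more" link to confirm what it actually passes):

```javascript
// Rebuild the current URL with a larger page size. The rowsperpage
// parameter name is an assumption; adapt it to whatever your org's
// "more" link uses.
function withRowsPerPage(url, n) {
  var base = url.split('#')[0];
  // Drop any existing rowsperpage parameter, then any dangling ? or &.
  var stripped = base.replace(/([?&])rowsperpage=\d+/i, '$1').replace(/[?&]$/, '');
  var sep = stripped.indexOf('?') === -1 ? '?' : '&';
  return stripped + sep + 'rowsperpage=' + n;
}

// Bookmarklet form:
// javascript:location.href=withRowsPerPage(location.href,10000);
```

Reloading the page this way does reset any checkboxes you've already ticked, so run the bookmarklet before you start selecting components.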

Other tips


If you're planning to create a large change set as you build out your solution over multiple days or weeks, bookmarking the change set's detail page in your sandbox org should be a quick win.

Sunday, May 31, 2015

Why Roll-Up Summary Requires Master-Detail

There's a 7-year-old idea on the Success Community titled "Eliminate Need for Master-Detail Relationship for Roll-ups", and users have voted it up to over 25,000 points.


I won't lie. I am most likely one of the 2,500 users who voted in favor of the idea, who thought I can't believe a no-brainer like this is still an idea and not a GA feature!

But today, while watching Who Sees What: Record Access Via Sharing Rules, I suddenly realized that the reason for the delay could actually be ridiculously simple: implementing this feature would violate the security design of the Salesforce1 Platform. How? By exposing information that would otherwise be hidden when OWD is set to "Private".

Scenario: VIP bank accounts


Let's say you're a system administrator for a fictitious financial institution called SaaS Bank. At SaaS Bank, there are everyday customers, and then there are VIP customers. VIP customers at SaaS Bank are high-profile individuals of great importance or great wealth, and a few notable VIPs include Barack Obama, Warren Buffett and Marc Benioff.

Understandably, VIPs get the white-glove treatment. Their relationships are discreetly managed by a handful of bankers in the Private Bank department within SaaS Bank. These bankers are known as Private Bankers, and their number one priority is protecting their clients' sensitive data, namely the clients' bank accounts and balances.

Data on customers' bank accounts are stored in a custom object labeled Bank Account, and all bank accounts serviced by SaaS Bank are tracked in this object.

The security requirement: Everyone at SaaS Bank should be able to see that a VIP is indeed a customer, but only Private Bankers (and their trusted colleagues) should be able to see the bank accounts held by a VIP.

The simple solution would be to make the Bank Account custom object private using OWD. And to relate Bank Account records back to a customer (i.e., a standard Account record), the object has a Lookup(Account) field, not a Master-Detail(Account) field.

The tricky requirement: All users want to see aggregate balance data for their customers.

So, if you could create a Roll-Up Summary field on the Account object that sums all balances for a customer, you would violate the private sharing model for the Bank Account object. A Roll-Up Summary field holds a value that is calculated based on all pertinent records.

In a private sharing model, how would... how could the Roll-Up Summary field hold a value that simultaneously shows 0 to a regular banker, the total balance to a Private Banker and something in between to other bankers with whom a Private Banker has manually shared records? The answer is it can't.

Objects with Master-Detail fields inherit the record access controls on parent objects. In this case you can present aggregate information on parent records via Roll-Up Summary fields, because if you have access to the parent record you also have access to all child records. But when you use a Lookup field instead because you need different record access for child records, well...

Okay, I get it, and I still need a workaround


I think there are legitimate reasons why an organization would want aggregate data to be automatically calculated and displayed through a Lookup relationship. There are two ways to work around this:
  • Ignore the built-in security constraint and leverage Apex running in system context to perform the roll-up. You can even use an off-the-shelf solution like the free Declarative Lookup Rollup Summaries app or the paid Rollup Helper app.
  • Use a custom visual element (e.g., Visualforce page) to display a contextual roll-up, taking into account the current user's access to child records. This would leverage the with sharing keyword to accurately display different values to different users.

In the end, it would be nice if either of the above options were made into native features. And I'm guessing that the first option, the convenient and intentional deviation from an established security model, is what the 2,500 supporters of that 7-year-old idea want.

Who Sees What: Record Access Via Roles (Corrected)

I'm admittedly a bit disappointed in the Who Sees What: Record Access Via Roles video. When I first read about the Who Sees What series, I thought it was awesome that salesforce.com decided to produce visual aids on the topic of security. As an alternative to the Security Implementation Guide, which boasts over 100 pages of official documentation on Salesforce security, I expected the videos to demystify the flexible and nuanced security controls available to system administrators.

However, after watching the Record Access Via Roles video, I feel that almost 70% of the content within is either misleading or simply inaccurate.

Note: I spent significant time staging test cases in an org before deciding to write this blog post, so please let me know if any part of my writing is technically wrong.

2:10 Three ways to open up access? Not quite...


In the AW Computing scenario, the presenter says that access to private Opportunity records can be opened up in one of three ways, quoted below:
  • "No access. In essence, this maintains the org-wide default of private. Users in this role would not be able to see opportunities that they do not own."
  • "View only. Users in a role can view opportunities regardless of ownership."
  • "View and edit. Users in a role can view and edit opportunities regardless of ownership."

This sounds good in concept, but as the video progresses to the demo portion to show how the three ways are actually implemented, the problem becomes clear. The presenter is actually misconstruing and wrongly explaining the Opportunity Access options.

In reality, the implicit access granted through the role hierarchy automatically solves the requirement presented in the video, and the Opportunity Access options are completely irrelevant to the hypothetical situation.

A more accurate explanation


Let's assume that the role hierarchy is set up as implied by the visual at 1:40 in the video.



Alan can see and edit whatever opportunities Karen and Phil can see and edit. The two reasons are that Alan is above Karen and Phil in the role hierarchy, and that OWD for the Opportunity object is configured to grant access using hierarchies (which as of Spring '15 you still cannot disable for standard objects). There are no more granular controls for records owned within a subordinate chain. If Karen can see a record, Alan can see that record. If Karen can edit a record, Alan can edit that record. Access via subordinates in the hierarchy is that simple.

So what do the Opportunity Access options do? Simply put, the options do exactly what the Role Edit page says they do.


Opportunity Access options have nothing to do with roles and subordinates. The selected option comes into play in situations, such as ones involving account teams, where a user from one branch of the role hierarchy owns an account, but a user from a different branch owns an opportunity for that account.

Try it yourself


Admittedly this will be really difficult if you don't have access to a sandbox org or a Partner Enterprise Edition org, but here's the idea.

Your role hierarchy looks something like the following:
  • CEO
    • SVP Products
      • Product Sales Manager
    • SVP Services
      • Services Sales Manager

Configure Opportunity Access for all roles in the hierarchy so that "users in this role cannot access opportunities that they do not own that are associated with accounts that they do own."

Set OWD for Opportunity to "Private", then do the following:
  1. Log in as a Product Sales Manager
  2. Create an account
  3. Create an opportunity
  4. Log in as a Services Sales Manager
  5. Verify that you cannot see the opportunity
  6. Create a new opportunity on the account owned by the Product Sales Manager
  7. Log in as the Product Sales Manager
  8. Verify that you cannot see the new opportunity created by the Services Sales Manager
  9. Log in as the administrator
  10. Change the Opportunity Access for the Product Sales Manager role so that "users in this role can view all opportunities associated with accounts that they own, regardless of who owns the opportunities."
  11. Log in as the Product Sales Manager
  12. Verify that you can now see the new opportunity created by the Services Sales Manager
  13. Verify that you cannot edit that opportunity

Saturday, May 30, 2015

The Apex Ten Commandments (in Writing)

For anyone (like me) who couldn't find the slides to The Apex Ten Commandments recording referenced on the Architect Core Resources page, here's the written list:

  1. Thou shalt not put queries in for loops
  2. Thou shalt not put DML in for loops
  3. Thou shalt have a happy balance between clicks & code
  4. Thou shalt only put one trigger per object
  5. Thou shalt not put code in triggers other than calling methods and managing execution order
  6. Thou shalt utilize maps for queries wherever possible
  7. Thou shalt make use of relationships to reduce queries wherever possible
  8. Thou shalt aim for 100% test coverage
  9. Thou shalt write meaningful and useful tests
  10. Thou shalt limit future calls and use asynchronous code where possible

And I just have a couple of comments to add for color.

Comments on #7


I haven't tested this hypothesis yet, but... does this commandment still hold with large data volumes? Especially in the context of batch Apex? One certainty is that executing a single relationship query is convenient for the developer. But when the query would return thousands of records that reference a small set of parent records, perhaps at larger data volumes a more efficient approach would be to split the query and leverage commandment #6 instead.

Comments on #10


The future annotation is slowly becoming obsolete with the introduction of the Queueable interface in Winter '15, although general guidelines for designing asynchronous automation still hold true.

Salesforce SSO in 5 Bullets

For my own edification, I want to summarize single sign-on options with Salesforce as succinctly as possible.

Using non-Salesforce credentials to get into Salesforce


This scenario can be simplified like this: A user already has a username + password combination stored in another system. The user wants to log into Salesforce using that existing username and password, instead of maintaining a separate username and password that's used only to log into Salesforce.

To achieve this, Salesforce allows:

Using Salesforce credentials to get into another app


This scenario can be simplified like this: A user is already logged into Salesforce. The user wants to launch another app without having to authenticate again. Instead, the other app should recognize the user and respond accordingly, based on the user's Salesforce session.

To facilitate this, Salesforce offers:

Closing thoughts


A company can mix the two approaches above, so that Salesforce becomes an intermediate link in a chain that allows access to a third-party app using credentials maintained in a non-Salesforce system.

Thursday, May 28, 2015

3 Integration Practices Missing from White Paper

I just skimmed through the Integration Patterns and Practices white paper, which seems like a great primer on some time-tested integration approaches. However, two GA features plus a pseudo-integration option seem to be notably absent from the document.


Am I forgetting any other options? Please let me know!

Force.com Canvas


This is a well documented feature for which I'll summarize the key capabilities as of Summer '15:
  • Authentication via signed request or OAuth 2.0
  • Canvas app in Visualforce via apex:canvasApp component
  • Canvas app in the Publisher as a custom action
  • Canvas app in the Chatter feed as feed items
  • Canvas in the Salesforce1 app via the navigation menu

Lightning Connect


Instead of feeding data back and forth with integration jobs or real-time callouts, Lightning Connect offers a speedy alternative for surfacing external data in Salesforce, using the OData protocol. Simple scenario: data stored in an on-premise database table can be exposed with a few clicks as an object in Salesforce that looks and feels to end users like any other standard or custom object. No code required!

Furthermore, Summer '15 added some really cool features to Lightning Connect, such as a native Salesforce Connector and the ability to access government and health data backed by Socrata Open Data Portal™. But in my opinion Lightning Connect will become absolutely, ridiculously amazing once write capabilities (still in Pilot) become GA, along with support for Process Builder, validation rules and Apex triggers.

HYPERLINK function in a formula field


Why do I even bother mentioning this? I think simply that the cheapest, crudest means of "integrating" two systems should not be overlooked as an option. Time is money, and if an external system supports deep linking or can process redirects to specific records, using a formula field to dynamically present a clickable URL to a user can be a really quick win.
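As a sketch, assuming the external system accepts deep links by record identifier, a formula field like the one below could give users a one-click jump. The base URL and the External_Id__c field are hypothetical; HYPERLINK takes a URL, a friendly label, and an optional link target.

```
HYPERLINK("https://erp.example.com/records/" & External_Id__c, "View in ERP", "_blank")
```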

Monday, May 25, 2015

Force.com Query Optimizer FAQ (cont'd)

The webinar Inside the Force.com Query Optimizer, delivered in April 2013, is still featured as a relevant resource on the Architect Core Resources page. While the webinar explains many key considerations for designing queries, many questions linger. Let's take a closer look at these open questions, and let me know if you have any of your own to add!

Why does query optimization affect me as an admin? I don't write code.


If I were a betting man, I would bet that under the hood, SOQL queries, reports, list views and related lists on detail pages all tap into the same query execution framework. So, if you manage page layouts, reports and/or list views, you should care about the Force.com Query Optimizer.

What is a selective query?


A selective query is a query that leverages indexes in filters to avoid full table scans and to reduce the number of records in your result set below the selectivity threshold.

Stupid question, but what is an index?


This is a great question. Simply put, an index is a field-based mechanism by which a query can execute significantly faster, compared to execution without the index. Salesforce technical architects don't need to know how an index actually works behind the scenes, much like office workers don't need to know how a Keurig machine makes coffee at the press of a button.

A technical architect probably just needs to know that fields are either indexed or not indexed, and that indexed fields should be used in query filters. The technical architect should probably also know off-hand what standard indexes are available, and that custom indexes have lower selectivity thresholds and can only be created by Salesforce Support.

Anyone hardcore enough to dig into the Oracle database-level index machinery may want to check out the Database Systems course, offered gratis through MIT OpenCourseWare.

What's considered a standard index?


Great question! I wasn't able to find concrete documentation on this, so you tell me (on Twitter)! I think an official answer would be a welcome addition to the "Force.com Query Optimizer FAQ" article. In the meantime, the documented list of indexed standard fields can all be considered standard indexes.

What is the selectivity threshold? And why should I care?


The selectivity threshold is the maximum number of records that can be returned in a result set, without disqualifying the index-based optimization option for a query. See the Query & Search Optimization Cheat Sheet for the exact calculations used with standard indexes and custom indexes.
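As a back-of-the-envelope sketch, the threshold formulas as I understand them from the cheat sheet can be expressed as two tiny functions (verify the percentages and caps against the current cheat sheet before relying on these numbers):

```javascript
// Selectivity threshold estimates for the Force.com query optimizer,
// using the formulas as published in the Query & Search Optimization
// Cheat Sheet (as I understand them; verify against the current version).
function standardIndexThreshold(totalRecords) {
  // 30% of the first 1M records plus 15% of the rest, capped at 1M.
  var t = 0.30 * Math.min(totalRecords, 1000000)
        + 0.15 * Math.max(totalRecords - 1000000, 0);
  return Math.min(Math.floor(t), 1000000);
}

function customIndexThreshold(totalRecords) {
  // 10% of the first 1M records plus 5% of the rest, capped at 333,333.
  var t = 0.10 * Math.min(totalRecords, 1000000)
        + 0.05 * Math.max(totalRecords - 1000000, 0);
  return Math.min(Math.floor(t), 333333);
}
```

On an object with 2 million rows, for example, a filter on a standard-indexed field stays selective up to 450,000 returned records, while a custom index caps out at 150,000 for the same object.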

Is there a difference between using nested queries vs. separate queries?


This was the first question asked on the Inside the Force.com Query Optimizer webinar during Q&A, "Is using nested queries good practice?" But I don't think the answer fully addressed the question. My personal guess (which needs to be validated in an org that actually contains large data volumes) is that nested queries have negligible impact on the execution time of a query, as long as the queries are not constructed in a way that uses the NOT operator.

The basis for my conjecture is a best guess that nested queries are executed sequentially, following an order of operations that allows the result set from one query to be used in another query. How true is this? I suppose I'll need to test the relevant query structures with selective filters applied: parent-to-child subqueries, child-to-parent semi-joins, and anti-joins that use NOT.
More importantly, however, it's worth noting that in some instances executing separate queries in Apex is unfeasible, and the only viable alternative is to use nested queries. I could be wrong (and please correct me if I am), but one such situation could be an Apex method that needs to iterate through a result set using a for loop, processing child records for each record returned.
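For reference, here are the standard nested-query shapes in SOQL that such tests would need to cover; the Industry filter below is just a hypothetical stand-in for a selective filter:

```sql
-- Parent-to-child subquery: one query returns parents plus their children
SELECT Id, Name, (SELECT Id, Email FROM Contacts)
FROM Account
WHERE Industry = 'Banking'

-- Child-to-parent semi-join: filter children by a set of qualifying parents
SELECT Id FROM Contact
WHERE AccountId IN (SELECT Id FROM Account WHERE Industry = 'Banking')

-- Anti-join: the NOT IN form that tends to defeat index-based optimization
SELECT Id FROM Contact
WHERE AccountId NOT IN (SELECT Id FROM Account WHERE Industry = 'Banking')
```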

BGP Routing for Salesforce Technical Architects

The "Network Best Practices for Salesforce Architects" page mentions optimizing BGP routing as a means to improve network latency and thereby improve Salesforce performance. But, what exactly is BGP routing?

I found a "Networking 101: Understanding BGP Routing" primer which I thought gave a pretty good 10,000-foot overview of BGP, which stands for Border Gateway Protocol. And from the primer I took away two key points:

  • BGP is primarily used to route information among ISPs or among a large enterprise and its multiple ISPs. As the primer says, "If you are the administrator of a small corporate network, or an end user, then you probably don't need to know about BGP."
  • A Salesforce technical architect should know about BGP and its purpose. Any real work to optimize BGP should be left to the professionals. Just like how a Salesforce solution architect should know about Apex without knowing how to write Apex, a Salesforce technical architect should know about BGP without knowing how to configure or optimize BGP.


In short, Salesforce latency issues can be a pain point for large enterprises that span multiple regions across the globe. If the internal network has already been reviewed and optimized to the best of the organization's capabilities, a Salesforce technical architect should acknowledge the problem and advocate for engaging seasoned networking professionals.

Sunday, May 17, 2015

Exchange Sync [sic] vs. Salesforce for Outlook

There's an intentional mistake in this Summer '15 post's title: The true comparison is between Email Connect and Salesforce for Outlook ("SFO"), not between Exchange Sync and SFO. If you are currently confused by Email Connect vs. Exchange Sync like I used to be, read on.

Basically, there are two key capabilities desired in any Salesforce-Outlook integration:
  • Record synchronization. Contacts, events (i.e., appointments and meetings), and tasks should be synchronized between a user's Salesforce experience and his Outlook experience.
  • Easy access to CRM data. A user working within Outlook should be able to easily look up records in Salesforce and to attach emails to those records.

Email Connect and Salesforce for Outlook take different approaches to implementing those two capabilities.

Record Sync
  Email Connect: Exchange Sync, which uses a service account on Exchange Server and simple configuration in Salesforce to enable fast, auto-magic synchronization to and from any app or device that connects to Exchange Server.
  Salesforce for Outlook: The Salesforce for Outlook service, which must be running in the background on a user's local machine. As soon as the machine is shut down or put to sleep, sync stops completely.

CRM Data Access
  Email Connect: The Salesforce App for Outlook, which produces an interactive side panel in Outlook using the new apps for Office platform. Because it's built on apps for Office, the app can look and feel native without a user needing to download or install an add-in on his local machine.
  Salesforce for Outlook: The Salesforce Side Panel, an add-in that must be installed on a user's local machine and then enabled in Outlook.

Software to install?
  Email Connect: No!
  Salesforce for Outlook: Yes, Salesforce for Outlook.

While the Salesforce App for Outlook is still being developed to meet and then exceed the capabilities of the currently GA Salesforce for Outlook add-in, admins can actually get the best of both worlds by mixing the two solutions. An admin can implement Exchange Sync (Beta) and still use the SFO add-in, with the SFO sync functionality turned off. This possibility is critically important for organizations that are using an older version of Exchange Server that doesn't support apps for Office.

In short, Email Connect is the true alternative to Salesforce for Outlook. And the long-term vision for Email Connect seems to be a no-software solution that not only syncs contacts, events and tasks, but also gives users an interactive side panel for easily working with Salesforce data directly within Outlook.

Incredibly, "no software" means that all of this can be achieved without the user having to install a single piece of software. Boom!

Tuesday, May 5, 2015

4 Warning Signs You’re Spamming, Not Sharing

A poll recently appeared about whether sharing blog posts to multiple groups within the Salesforce Success Community constitutes spam. Behind that poll was a lengthy conversation and another poll, where opinions seemed to be split 50-50 between "yes, cross-post freely" and "no, keep it contained in an opt-in channel". So, if I write a piece about best practice for regression testing in Salesforce, and in my excitement I want to share that piece with others on the Success Community, am I sharing or am I spamming? (For the record, the piece I mentioned is hypothetical. I haven't written any such piece to date.)

There's a fine line between sharing for the good of a community and spamming the community in self-promotion (intentional or not). And as a professional consultant, I try very hard to walk that fine line without crossing into spam territory.

The 4 warning signs below highlight my views on what's spam and what's not.

1. Your post promotes a product or service


When starting a new thread of discussion with a new post, consider whether the content you're sharing mentions a specific paid app from the AppExchange or highlights the success of a specific consulting company. If so, chances are people are going to feel like you're spamming.

I do want to highlight a difference between posting new content unprompted (unsolicited) and responding to a question or an explicitly expressed need (solicited). In the latter case, you could actually be helping by pointing the other person to a solution, paid or otherwise, that really addresses the need.

Consultant's note: Sometimes I come across a question on the community that seems like a lead for a new project at a prospective client. Personally I do still take time to answer the question in as much detail as I think is feasible, so that the poster has enough guidance to either implement the solution on his own or to contract a partner to help. Posting bluntly, "you need a partner to help" would be inappropriate.

2. You blindly cross-post your blog across multiple groups


With Chatter, it's super easy to share a single post across multiple groups, either by sharing (i.e., reposting) the original post or by mentioning other groups in a comment. If you're simply cross-posting without assessing and repackaging the content to make it relevant to each new audience, chances are you're spamming.

Consider for example a notification about an MVP Office Hours conference call, offered free of charge to the community. People share this information with their local user groups, and generally speaking no one complains about this being spam, because the resource is relevant and targeted to users, who are naturally members of user groups.

But is the same notification appropriate for an official group that's created specifically for a feature or different topic of interest, such as the Salesforce1 group or the Communities Implementation group? Now the cross-post is starting to smell like spam, unless additional context can be provided to explain that there will be a dedicated segment on mobile apps or Communities.

3. Your post has no clear relevance to the audience


Take a look at the charter or description of a group or any channel, and assess whether your post has relevance and value to add to that group. Obviously if the value is unclear or loosely associated, your post is probably spam. But I think it is possible to tweak or augment a post with additional information to make it relevant to a new audience.

Let's use Lightning Process Builder as an example in the context of Salesforce. A generic statement that adds unclear value could be, "Process Builder is the future of declarative automation on the Salesforce1 Platform." Posting this as-is to a Sales Effectiveness group is probably a bad idea. But adding additional context could make the post relevant, such as, "Here's an example of a process that intelligently populates the Next Step field with a recommended action based on field values on the opportunity and account."

4. You have no prior relationship with the audience


This one should be a no-brainer. Did you join a group or cross-post to a group just to share a piece of content? Even if you're doing it with the best of intentions, your action could be seen as spam. This is a human consideration, not necessarily a technicality.

Closing thoughts


Use common sense. None of the signs above is a hard and fast rule. Instead, think of them as factors that make up a "contributor score" similar to a Sender Score. Reputations are difficult to build and easy to ruin, so don't lie to yourself about what you're trying to do.

Treat your audience as real people, and imagine you were telling each person individually about whatever content you're about to share. If you think the vast majority of people will be appreciative, go for it. If not, you should probably err on the side of caution.

P.S. At the risk of being seen as promoting religion, I also want to share Matthew 18:15-17. It's five well-written sentences about conflict resolution that apply just as well to resolving differences of opinion about what is and isn't spam.

Saturday, May 2, 2015

Alternative to Force.com Integrations (DEV-502) in CTA Study Guide

The Salesforce CTA Study Guide (Spring '15) recommends completing the "Force.com Integrations (DEV502)" course in preparation for the exam. While the name of the course has changed to "Integrating with Force.com (DEV-502)", the bigger problem is that the course is only offered as a $3,400, 4-day instructor-led course, which I don't have the luxury of taking due to project commitments.

Furthermore, the description of the course reads, "Learn to design and build all types of integrations with Force.com. The first day introduces the major integration methods and demonstrates techniques for using those methods. In the remaining days, you’ll explore the specifics of the major technologies that play a role in integration, including the Force.com Web services API, sites, and portals." My concern is that the description seems outdated. To name a few possible gaps:

  • Portals have been replaced with Communities
  • SSO with SAML 2.0 and OAuth are now possible
  • Canvas enables external apps to be folded in
  • Lightning Connect currently allows reading from (and eventually writing to) external databases as if they were native objects

Searching the training catalog instead for online courses matching the keyword "integrations" returned 37 results. While the resulting list of relevant courses is long, given the goal is to replace a 4-day course I think the following online courses make a good substitute for DEV-502:
  • Integrating with Force.com: An Overview
  • Technical Architect: Force.com Integration Basics
  • Integrating with Outbound Messaging
  • Integrating with Salesforce to Salesforce
  • Integrating with Apex
  • Integrating with the SOAP API
  • Large Data Volumes
  • Integrating with the Force.com Bulk API
  • Introduction to the REST API
  • Integrating with the Force.com and Chatter REST APIs
  • Integrating with the Force.com Streaming API
  • Integrating with Force.com: Security
  • Integrating with Force.com: Single Sign-On
  • Writing Secure Applications on Force.com
  • Security Tips and Tricks
  • Integrating with Force.com Using Mashups and Canvas

And a bonus module would be to complete the official Salesforce1 Lightning Connect Tutorial on GitHub.

Saturday, April 25, 2015

Recursive Process with Autolaunched Flow

In the old days of Salesforce, if a workflow field update occurred during a DML operation, the before and after triggers for that object would execute a second time. But in the new world of Process Builder with autolaunched flows, how does the order of operations play out?

The Spring '15 documentation seems to imply that processes only fire once. The reason I say this is there is no corresponding explanation for processes in the same vein as the explicit explanation (shown below) for how a workflow rule can cause triggers to fire again.


As with many things in Salesforce, I like to trust and verify. And I found that Process Builder processes actually can cause triggers and the process itself to execute up to five (5) more times. I couldn't find this documented anywhere, but that seems to be the de facto limit on process recursion.

The proof


To prove this to myself, I decided to stage a manual test based on the following constructs:
  • A custom object named TriggerEvent__c exists as a child to the Account object
  • An Account trigger exists to create a TriggerEvent__c record before update
  • A process exists that launches a flow, which increments the Num Employees value by one if Num Employees is not blank

Under this setup, when I updated an account and entered 100 for Num Employees, I actually ended up with 106 for Num Employees and six new child TriggerEvent__c records.
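Assuming hypothetical API names (TriggerEvent__c with an Account__c lookup), the trigger side of the test could be sketched like this, where counting the child records afterward reveals how many times the trigger fired:

```apex
// Sketch only: object and field names are assumptions, not the exact code I used.
// Each before-update execution inserts one child record per account in Trigger.new.
trigger AccountTrigger on Account (before update) {
    List<TriggerEvent__c> events = new List<TriggerEvent__c>();
    for (Account acc : Trigger.new) {
        events.add(new TriggerEvent__c(Account__c = acc.Id));
    }
    insert events;
}
```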

update vs. undelete in Apex Triggers

Based on the Implementing Triggers training module, there are only four DML operations for which triggers can be executed: insert, update, delete and undelete. But, when an undelete operation occurs, does that also count as an update operation, whereby the sObject.IsDeleted value is toggled?

To settle the matter for my own benefit, I created an Apex test to validate my assumptions. Here's what I learned and confirmed:

  • Only one of Trigger.isUpdate, Trigger.isDelete and Trigger.isUndelete will ever be true during trigger execution. This means that the three operations are indeed distinct and constitute different trigger contexts.
  • The ALL ROWS keyword is required to retrieve soft-deleted records that are in the Recycle Bin

Below are the trigger and its test class I used.

AccountTrigger.cls



AccountTriggerTest.cls


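In outline, the trigger logs its context and the test exercises undelete. A minimal sketch of the idea (not the exact classes above):

```apex
// Sketch: exactly one of the three context variables is true per execution,
// confirming that update, delete and undelete are distinct trigger contexts
trigger AccountTrigger on Account (after update, after delete, after undelete) {
    System.debug('isUpdate=' + Trigger.isUpdate
        + ' isDelete=' + Trigger.isDelete
        + ' isUndelete=' + Trigger.isUndelete);
}
```

```apex
@isTest
private class AccountTriggerTest {
    @isTest static void undeleteIsNotUpdate() {
        Account acc = new Account(Name = 'Undelete Test');
        insert acc;
        delete acc;
        // ALL ROWS is required to retrieve the soft-deleted record
        // that's sitting in the Recycle Bin
        Account softDeleted = [SELECT Id, IsDeleted FROM Account
                               WHERE Id = :acc.Id ALL ROWS];
        System.assertEquals(true, softDeleted.IsDeleted);
        undelete softDeleted;
    }
}
```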
Monday, April 20, 2015

Salesforce Limits All Developers Should Know

As I was going through the Object-Oriented Programming in Apex training module, the section on "Limit Methods" reminded me that out of the myriad limits detailed in Salesforce Limits Quick Reference Guide, there are some that are more important than others. Which ones? How about the ones that have their own Limits methods...


To that end I went through the exercise myself and compiled a list of the Limits methods and what the respective limits are, using the helpful resources below.


This was a great learning experience for me, as I discovered many old limits have been deprecated, especially the ones previously affecting describe methods! It was also interesting to note that there is no limit on callouts per 24-hour period, or at least none that I could find.
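The pattern is easy to see in anonymous Apex: each consumption method has a matching "getLimit" counterpart. For example:

```apex
// Pairing consumption methods with their corresponding limit methods
System.debug('SOQL queries: ' + Limits.getQueries()
    + ' of ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements()
    + ' of ' + Limits.getLimitDmlStatements());
System.debug('CPU time (ms): ' + Limits.getCpuTime()
    + ' of ' + Limits.getLimitCpuTime());
System.debug('Heap size: ' + Limits.getHeapSize()
    + ' of ' + Limits.getLimitHeapSize());
System.debug('Callouts: ' + Limits.getCallouts()
    + ' of ' + Limits.getLimitCallouts());
```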

Surprises with and without sharing in Apex

I'll admit that I never dug too deep into the with sharing and without sharing keywords. At a high level I felt that if I want to enforce security and visibility, I should use with sharing. Otherwise I should use without sharing, which I also assumed was the default.

The Object-Oriented Programming in Apex training module surprised me by telling me that CRUD permissions and field-level security ("FLS") are ignored both with sharing and without sharing! Also, system mode simply means that record-level read and edit privileges are ignored, since CRUD and FLS are always ignored.

This is contrary to what I'd inferred from the Apex Code Developer's Guide, which writes, "In system context, Apex code has access to all objects and fields— object permissions, field-level security, sharing rules aren’t applied for the current user." My interpretation of this statement was that on the flip side, when with sharing is applied, "object permissions, field-level security, sharing rules" would all be applied for the current user.

It was a bit hard for me to believe that Apex would not respect CRUD (if not FLS) permissions, so I created a Visualforce page to test this in my org. And it seemed to me that indeed, an Apex controller created using the with sharing keyword would allow a user without the Delete object permission to delete a record. Crazy... am I missing something?
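For reference, the controller I tested was shaped roughly like this (simplified, and the class and property names here are mine):

```apex
// Even "with sharing", this delete is not blocked by the user's object
// permissions; with sharing enforces record-level sharing, not CRUD/FLS.
public with sharing class AccountDeleteController {
    public Id accountId { get; set; }

    public PageReference deleteAccount() {
        delete new Account(Id = accountId);
        return null;
    }
}
```

One manual safeguard in custom controllers is to check the describe result first, e.g. Schema.sObjectType.Account.isDeletable(), and surface an error if it returns false.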

Well! What's more interesting was that with a standard controller, CRUD permissions are always observed with actions like delete(), regardless of whether an extension class is defined with sharing or without sharing. For example, a button that invoked the StandardController.delete() action would automatically be hidden if a user didn't have the Delete object permission. Furthermore, if a custom action in the extension class invoked the standard controller's action, the custom action would also be subject to the user's CRUD permission, generating an "insufficient privileges" error.

So, I guess the way to enforce CRUD is to use standard controllers, which I don't think is always feasible, especially with mass actions.

Sunday, April 19, 2015

Alternative to DEV-501 Modules in CTA Study Guide

When planning my CTA studies, I didn't realize that the "Apex" and "Visualforce Controllers" modules mentioned in the Study Guide are actually courses in Spring '15, not modules. The Apex course is 331 mins, and the Visualforce Controllers course is 172 mins. Together that's over 8 hours of material to cover.

So, instead of going through both courses in full, and since several basic modules likely won't be useful for me, I'm going to revise my plans to cover just a few modules from each course.

From the Apex course:

  • Data Types and Logic
  • Object-Oriented Programming in Apex
  • Implementing Triggers
  • Working with Web Services
  • Receiving and Sending Emails Through Apex
  • Advanced Topics


From the Visualforce Controllers course:

  • Visualforce Controller Extensions and Custom Controllers
  • Further Visualforce Controller Topics


Another problem with the study guide is that there is no Managing Development Lifecycle module, or at least not one that I could find in the training catalog. In this case I'm going to replace the "module" with simply reading through salesforce.com's official Development Lifecycle Guide.

Friday, April 17, 2015

Exchange Sync (Beta) Resources in Spring '15

It's been a bit hard to find all of the official resources on Exchange Sync (Beta), so I've put together a list of the resources below that I've encountered. Please add comments to share any more that you've found.

Wednesday, April 15, 2015

Salesforce Security & Visibility Design Considerations

I felt compelled to create a Salesforce Security & Visibility Design Considerations matrix to synthesize and expand on the information presented in the Design Considerations training module, part of the Building Applications with Force.com - Part 1 course. We already have a similar template internally in my company, but creating one from scratch and adding to it on my own feels like a good way to internalize the knowledge.

The matrix focuses on two aspects: object security and record visibility. Specifically, who has permissions to do what with each object, and who can see what records in each object.

To control the "doing", the most straightforward way is to use object permissions with profiles and permission sets. However, it is conceivable that less admin-friendly and user-friendly options can be used to prevent certain operations. Below are a couple of examples:

  • A validation rule can be used to prevent editing closed opportunities
  • A Process Builder process could fire an autolaunched flow that reaches a fault condition, causing the create or edit operation to fail. I doubt anyone would do this right now, though, not only because it's cumbersome to implement but also because the error presentation in Spring '15 is not very pretty.
  • An Apex trigger can be used to add errors to a record, thereby preventing the create, edit or delete operation

To control the "seeing" of records, the only practical means are to use the role hierarchy, teams (where applicable) and sharing rules. However, I do want to point out one technically possible way to restrict visibility in the UI, and that is to use Visualforce page overrides for standard actions. A Visualforce page could use the init action to route the user to a different page based on record and user criteria. While in theory the approach would work, in practice the override would fall flat because of incompatibility with Salesforce1 and because users can create reports or list views to see the "hidden" records.
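As a sketch of that init-action idea (hypothetical names and criteria, and again not recommended in practice), the override page's <apex:page> tag would specify action="{!init}" and delegate to an extension like this:

```apex
// Hypothetical extension backing a Visualforce view-override page
public with sharing class AccountViewOverrideController {
    private final Id accountId;

    public AccountViewOverrideController(ApexPages.StandardController std) {
        this.accountId = std.getId();
    }

    public PageReference init() {
        Account acc = [SELECT OwnerId FROM Account WHERE Id = :accountId];
        // Route non-owners away from the record (criteria are illustrative)
        if (acc.OwnerId != UserInfo.getUserId()) {
            return Page.AccessDenied; // hypothetical Visualforce page
        }
        return null; // null keeps the user on the override page
    }
}
```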

Alternative to "Designing Applications" Module in CTA Study Guide

The CTA Study Guide recommends the following three DEV-401 modules: Designing Applications, Data Management, and Enhancing the User Interface Using Visualforce. However, upon checking the training catalog I couldn't find an online module titled "Designing Applications".

Searching for "designing applications", the catalog returned two relevant courses: Building Applications with Force.com - Part 1 & Part 2. So, based on the descriptions I figured the following modules would make a more complete set to start my training:

  • Design Considerations. This module introduces the business requirements that an organization might have when setting up security and access.
  • Managing Your Users' Experience. Learn how to set up users with appropriate permissions. See how licenses and profiles dictate a user's access to an application.
  • Controlling Access to Records. Examine different ways in which users receive access to records: through ownership, organization wide defaults, roles and sharing rules.
  • Designing Data Access Security. This module provides a summary of security and access features. Through a number of business scenarios, students will have the opportunity to apply the knowledge that they have gained about determining user access.
  • Data Management. Learn the basics of data management including record IDs, external IDs, and object relationships.
  • Enhancing the User Interface Using Visualforce. Discover what Visualforce is and its features and capabilities. Learn the basics of Visualforce syntax and how Visualforce components are similar to HTML and XML. Learn how to create Visualforce pages and add them into an application.
  • Additional Uses for Visualforce. Continue to explore using Visualforce pages to change the look and feel of a page, to display Salesforce data on a website using AJAX and JavaScript in conjunction with Visualforce, and to develop Visualforce pages for mobile devices.

Personalized Schedule for My 30-Day Action Plan

Yesterday I created The CTA Review Board Candidate's 30-Day Action Plan. But when I looked ahead at my work commitments and personal events, I realized that the plan is useless without the context of my own life. So, I created a daily schedule and used some formulas and conditional formatting to give better form to my plan.


Now that I'm comfortable knowing I won't burn myself out or neglect my personal relationships, I'm ready to dig into Day 1 (at a really late hour, I know).

Monday, April 13, 2015

The CTA Review Board Candidate's 30-Day Action Plan

Having already passed salesforce.com's Certified Technical Architect Multiple-Choice Exam, and having then failed my first attempt at the Review Board Presentation, I'm now faced with a sobering reality: My second (and likely final) attempt at the CTA certification is less than 60 days away.

Complicated by the following factors, I asked myself, "Can I really do this?"
  • I'm fully billable for 40 hrs/wk between now and then on an important project where my role is the Salesforce technical architect
  • I'm the happy father of a 7-month-old, and there's no one at home to support my wife except me, since our closest parent is about 1,000 miles away
  • I'm paying out of pocket to take an MBA class right now that I cannot afford to fail

The reality seems to be, "Do I really have a choice?" It's now or never, sink or swim. So, to help center myself and to aid any others who're about to embark on a similar journey, I'm going to lay out what I'm going to do in the next 30 calendar days, dubbing it, The CTA Review Board Candidate's 30-Day Action Plan.

A few notes on terminology used in the plan:

  • A "comprehensive scenario" includes use cases for Sales Cloud, Service Cloud, Community Cloud, Chatter, the Salesforce1 Platform, the Salesforce1 mobile app and Heroku
  • "Large data volumes" means that at least one object holds over 1 million records
  • A "technical work stream" involves making changes to Salesforce itself or to an integration job or migration effort

Luckily for me, in Spring '15 the requirement for a customer case study is no more! This change implies extra scrutiny on the Hypothetical Scenario Exam, Presentation, and Discussion, but I feel the odds for me personally just improved ever so slightly.

All right, then. Let's do this!

Wednesday, April 1, 2015

Events Created by Apex Respect DST

For my own edification, I wanted to confirm that Apex in Salesforce is capable of automatically adjusting for DST based on the user's local time zone.

The scenario: As a user in the America/New_York time zone, when I create an event using Apex for July 4, 2015 (EDT) at 9:00 AM and another event for December 25, 2015 (EST) at 9:00 AM, I expect the following:

  • Both events should appear in the UI as starting at 9:00 AM on my calendar
  • The UTC start time for the July 4 event should be 13:00
  • The UTC start time for the December 25 event should be 14:00, which accounts for the end of Daylight Saving Time

The following code confirms the expected behavior:
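A sketch of the kind of anonymous Apex that demonstrates this (subjects and durations are arbitrary; DateTime.newInstance interprets its arguments in the running user's time zone):

```apex
// Run as a user in the America/New_York time zone
Event july = new Event(Subject = 'EDT check',
    StartDateTime = DateTime.newInstance(2015, 7, 4, 9, 0, 0),
    DurationInMinutes = 60);
Event december = new Event(Subject = 'EST check',
    StartDateTime = DateTime.newInstance(2015, 12, 25, 9, 0, 0),
    DurationInMinutes = 60);
insert new List<Event>{ july, december };

// Local time stays 09:00 for both; the UTC rendering shifts with DST
System.debug(july.StartDateTime.formatGmt('HH:mm'));     // expect 13:00
System.debug(december.StartDateTime.formatGmt('HH:mm')); // expect 14:00
```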

Thursday, February 12, 2015

Queueable vs. @future throwdown!

At first blush, the new Queueable interface appears to supersede the old @future annotation in Apex, especially now that in Spring '15 you can chain a job to another job an unlimited number of times. Yes, that's right: unlimited.

So, what's the purpose of @future in this new age of Apex?

Let's start by comparing the well-known @future limits with Queueable.

@future considerations vs. Queueable:

  • Governor limits. @future: some governor limits are higher, such as SOQL query limits and heap size limits. Queueable: some governor limits are higher than for synchronous Apex, such as heap size limits.
  • Method definition. @future: methods with the future annotation must be static methods. Queueable: implementations must be instantiated as objects before the execute() instance method is called, leaving room for additional job context.
  • Entry point. @future: methods with the future annotation can only return a void type. Queueable: classes must implement public void execute(QueueableContext), which is how a job is initiated.
  • Parameters. @future: the specified parameters must be primitive data types, arrays of primitive data types, or collections of primitive data types; future methods cannot take sObjects or objects as arguments. Queueable: a Queueable object can be constructed with any type of parameter, stored as private member variables.
  • Callouts. @future: can make a callout to an external service.
  • Chaining. @future: a future method can't invoke another future method. Queueable: you can chain queueable jobs, adding only one job from an executing job, which means that only one child job can exist for each parent job.
  • Per-transaction limit. @future: no more than 50 method calls per Apex invocation. Queueable: you can add up to 50 jobs to the queue with System.enqueueJob in a single transaction.
  • 24-hour limit. @future: the maximum number of future method invocations per 24-hour period is 250,000 or the number of user licenses in your organization multiplied by 200, whichever is greater. This is an organization-wide limit shared with all asynchronous Apex: Batch Apex, Queueable Apex, scheduled Apex, and future methods. The licenses that count toward this limit are full Salesforce user licenses or Force.com App Subscription user licenses.

From the reverse side, what about the known limits of Queueable?

Queueable considerations vs. @future:

  • Shared async limit. Queueable: the execution of a queued job counts once against the shared limit for asynchronous Apex method executions.
  • Per-transaction limit. Queueable: you can add up to 50 jobs to the queue with System.enqueueJob in a single transaction. @future: no more than 50 method calls per Apex invocation.
  • Chain depth. Queueable: no limit (except in DE and Trialforce orgs) is enforced on the depth of chained jobs, which means you can chain one job to another and repeat this process with each new child job. @future: n/a (cannot chain @future methods).
  • Chain width. Queueable: when chaining jobs, you can add only one job from an executing job with System.enqueueJob, which means that only one child job can exist for each parent queueable job; starting multiple child jobs from the same queueable job isn't supported.
  • Testing. Queueable: you can't chain queueable jobs in an Apex test; doing so results in an error. To avoid the error, check whether Apex is running in a test context by calling Test.isRunningTest() before chaining jobs.

The verdict: Implement Queueable as a standard approach, and only look to @future if for some reason Queueable gives you unexpected or undocumented problems.
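For completeness, here's a minimal Queueable sketch (class name and logic are hypothetical) showing the two features @future can't match, sObject state and chaining, along with the test-context guard:

```apex
public class AccountScoringJob implements Queueable {
    private Account acct;   // sObject state: not possible with @future parameters
    private Integer pass;

    public AccountScoringJob(Account acct, Integer pass) {
        this.acct = acct;
        this.pass = pass;
    }

    public void execute(QueueableContext ctx) {
        acct.Description = 'Scoring pass ' + pass;
        update acct;
        // Chain at most one child job; chaining throws in test context,
        // hence the Test.isRunningTest() guard
        if (pass < 3 && !Test.isRunningTest()) {
            System.enqueueJob(new AccountScoringJob(acct, pass + 1));
        }
    }
}
```

Kicking it off is one line: System.enqueueJob(new AccountScoringJob(someAccount, 1));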