Thursday, December 13, 2018

odata webapi 2.x on core, function parameters

odata webapi 2.x on core, function parameters

When I needed to add some functions (or actions) to my odata server for an internal database that I wanted to connect to Dynamics CRM, I found the documentation challenging. Even blogs on this topic did not help much, and I hit a lot of errors.

When setting up a function in your EDM model, you can follow the instructions. In my domain model, I have a SystemUser and PrincipalPermissions, which are a set of permissions needed to determine whether you can CRUD an entity or a collection of entities. Nothing special here; I am using the same names as those found in Dynamics CRM for convenience, but remember, this is my separate database that I need an OData front end for:

var systemUserES = builder.EntitySet<SystemUser>("SystemUsers");
var retrievePrincipalAccessFunc = builder.EntityType<SystemUser>()
	.Function("RetrievePrincipalAccess")
	.ReturnsFromEntitySet<PrincipalPermission>("PrincipalPermissions"); // return single instance
retrievePrincipalAccessFunc.Parameter<string>("type");
retrievePrincipalAccessFunc.Parameter<Guid>("id").Optional();
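One thing the snippet above does not show: the route below qualifies the function with the Encore namespace, so the builder's namespace has to be set to match. A one-line sketch (the Encore name is inferred from the route below; the original did not show this line):

builder.Namespace = "Encore";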

Here’s the final controller declaration:

[HttpGet]
[ODataRoute("SystemUsers({key})/Encore.RetrievePrincipalAccess(type={otype},id={oid})")]
public ActionResult<PrincipalPermission> RetrievePrincipalAccess(
	Guid key,
	string otype,
	Guid? oid) { ... }

Note the following which I did not find documented:

  • The function parameter names declared in the EDM model are the names used in the actual function call in the URL, not the {…} placeholder names in the C# route template.
  • The names inside the {…} must match the C# method’s parameter names; order does not matter.
  • The GUID in the function call itself (in the HTTP URL) should not be quoted even though it’s a function argument. The parameter type is Guid, and GUIDs are not quoted in OData URLs (see the example request after this list).
  • The parameters in the C# parameter list do not take [FromODataUri]. If you keep those attributes on the parameters, your route will throw an exception during startup indicating the route is invalid. If you did not have an ODataRoute, then you would want [FromODataUri] on the parameters; the explicit ODataRoute removes the need for it.
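For example, a request that exercises both parameters looks like the following (the GUIDs and the /odata service root are made-up values); note the quoted string and the unquoted GUIDs:

GET /odata/SystemUsers(5b190a9e-6d03-4f5a-9fbd-7b2f2a3e0d11)/Encore.RetrievePrincipalAccess(type='account',id=1db5b0f1-64ae-4aa6-9a41-8b0d3a217a6b)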

I worked through this fast enough, but I do find that things like this slow me down a bit.

At least it's all working!

Sunday, October 21, 2018

faking a view in an odata data source

faking a view in an odata data source, all using .net core
The current Dynamics 365 Customer Engagement virtual entity capability requires an odata source. So I was fine creating a Virtual Entity, but my client also needed the ability to efficiently grab a set of data that requires a few joins across the source database. What to do?
Creating an odata source is easy. I needed to do this and followed the instructions for using the odata server for aspnetcore 2.1 and entityframeworkcore.
Since this was all dotnet core, I was using my linux laptop and visual studio code as my editor.
It was all straightforward, but I ran into an issue where I needed to efficiently pull some “people” data for this virtual entity and the entity framework queries were not efficient. What I really wanted was to run one very specific query, joined to two other tables, that returned exactly the data I needed using the odata protocol. While you can easily run a query in entity framework that returns exactly the data you need using FromSql and a Select statement, having to use Include in my entityframework query forced it to pull too much data from the database, and my odata server needed to be small so it could run on a cheap azure appservice plan.
It was clear that what I really needed was a simple way to pull from a view. EFCore supports this, but there was one problem: there was no database view defined for what I wanted. Ugh! I needed to fake a view.
I really needed a new entity that would look like a real entity to serialize in the OData layer but not be mapped in the EFCore layer, and I needed to mimic a “full text search” type of pattern matching capability. That’s actually simple to do using a [NotMapped] attribute on my class and a regex. I needed this “view” to filter itself using more advanced filters than a Contains method in Linq, hence the need for regexes. This allows the user to use wildcards.
I knew this “search” requirement would force computation to occur outside the database and on my server, but for the data I was working with, this iterative approach was efficient enough on my tiny azure server. I was not able to use azure’s scalable search solution since that would mean setting up a process to scan the database and push search data to azure–something that was too much for what I needed. Linq knows when you are using functions that it cannot push down and optimize, so it pulls the table from the database and processes the remaining functions locally. All of this was OK for what I needed.
So here’s the key parts:
The object that will be serialized in the OData framework back to my client:
[NotMapped]
public class PeopleHomeView
{
 [Key]
 public Guid Id { get; set; }
 public String LastName { get; set; }
 public String FirstName { get; set; }
 public DateTime? CreatedOn { get; set; }
 public String Nickname { get; set; }
 public Guid? CompanyId { get; set; }
 public String CompanyName { get; set; }
}
Here’s the registration of a function on this entity during the EDM model builder callback. You have to use a function because the convention-oriented approach and the odata specification say that you cannot really add arbitrary parameters to what is essentially a “get” request. Check out the conventions described in the OData documentation and review the routing section. So we need to use a “function”. Using a function means that the call will be /PeopleHomeViews/NS.Find(text='Some%Name',orderBy='CreatedOn')?$top=1000.
private static IEdmModel GetEdmModel() {
 ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
 builder.Namespace = "NS";
 //...
 var phvb = builder.EntitySet<PeopleHomeView>("PeopleHomeViews");
 var phvFunc = phvb
  .EntityType.Collection
  .Function("Find")
  .ReturnsFromEntitySet<PeopleHomeView>("PeopleHomeViews");
 phvFunc.Parameter<string>("text");
 phvFunc.Parameter<string>("orderBy");
 return builder.GetEdmModel();
}
The parameter text holds the filter criteria and orderBy is the criteria I use to sort. I need to tell the view how to sort because if I use $top (which I do to limit results and avoid pulling the entire dataset), the sort must be applied before the top is taken.
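For example, a hypothetical request (again assuming a service root of /odata) that filters, sorts, then takes the top 100:

GET /odata/PeopleHomeViews/NS.Find(text='Smi%',orderBy='LastName')?$top=100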
Normally, EF scans the entity and merges metadata from the C# class with any other configuration data provided, say through the model builder in OnModelCreating or whatever other approach you use to configure EntityFramework. Previously, in the non-core version of EF, it would then run a “model check” against the database to see if an update should occur. EFCore does not do that check anymore, so the fact that the entity does not exist in the source database does not trigger an error or the desire to run a database migration. Since our entity is not mapped, EFCore does not try to do anything with it.
The controller is a bit more complex because we have some “search” requirements to meet:
[EnableQuery(MaxTop = 5000)]
public class PeopleHomeViewsController : ODataController {
 EC _db;
 public PeopleHomeViewsController(EC context) => _db = context;
 // the column aliases must line up with the PeopleHomeView property names
 protected static String viewQuery = @"
 select tp.guid as Id, LastName, FirstName, Nickname,
  tp.TimeEntered as CreatedOn,
  tc.companyname as CompanyName, tc.guid as CompanyId
 from tpeople tp
 left outer join texperience te on te.guid = tp.currentexperienceguid
 left outer join tcompany tc on tc.guid = te.companyguid";

 [HttpGet]
 public IOrderedQueryable<PeopleHomeView> Find(
  [FromODataUri] string text, [FromODataUri] string orderBy) {
  if (orderBy == null || orderBy.Length == 0)
   orderBy = "CreatedOn DESC";
  if (text == null || text.Length == 0)
   return _db.PeopleHomeView
    .FromSql(viewQuery)
    .OrderBy(orderBy); // string-based OrderBy comes from a dynamic linq extension
  var r = Utils.LikeToRegular(text);
  return _db.PeopleHomeView
   .FromSql(viewQuery)
   .Where(x => Regex.IsMatch(x.LastName ?? "", r, RegexOptions.IgnoreCase) ||
    Regex.IsMatch(x.FirstName ?? "", r, RegexOptions.IgnoreCase) ||
    Regex.IsMatch(x.Nickname ?? "", r, RegexOptions.IgnoreCase) ||
    Regex.IsMatch($"{x.FirstName} {x.LastName}", r,
     RegexOptions.IgnoreCase))
   .OrderBy(orderBy);
 }
}
public static class Utils {
 public static String LikeToRegular(String value) {
  return "^" + Regex.Escape(value)
   .Replace("_", ".")
   .Replace("%", ".*") + "$";
 }
}
The controller runs a sql query, very efficiently, then continues by building an IQueryable return value. We want an IQueryable return value because the OData framework can further process the results based on odata query options such as $top.
Our controller does what we expected, though: it pulls the entire table and processes the hardcoded attributes that represent what we want to search on. In this case, the requirement was to search on the last, first, nick and full name of the person to limit results. The code supports an empty orderBy, and if no filter criteria is provided, it returns the entire dataset. LikeToRegular translates the % and _ wildcard symbols into regex language.
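For example (hypothetical patterns):

Utils.LikeToRegular("Smi%");  // "^Smi.*$"  -- matches anything starting with "Smi"
Utils.LikeToRegular("Sm_th"); // "^Sm.th$"  -- _ matches exactly one character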
Now I could provide the standard odata methods needed by my Virtual Entity but, where I needed better select performance, use a function that let me get exactly what I wanted much more efficiently.
That’s it!

Sunday, June 3, 2018

dynamics crm, azure functionapps and scala

dynamics crm, azure functionapps and scala

I had a need to parse documents and create a “text” version of their content for inserting into a dynamics multi-line text attribute. The idea was that the content would be used for search as well as “quick reading.”

Java’s tika library is highly regarded for parsing many document types, not just Microsoft Office documents but also PDFs, etc.

Since the functionality to parse a document and update an entity’s attribute is on a form, we have a few choices to do this:

  • Create a web site that we hit with a dynamics crm webhook
  • Create a web server that lets us make REST calls to extract the text.
  • Don’t use java and write a plugin to do this.
  • Use MS flow with some type of text extraction service.
  • Use some other existing plugin/appsource solution.

Dynamics is clearly following the trend of the MS cloud to pull processing out of the server and put it into microservices. Significant innovation is occurring in the azure cloud vs data integration on the dynamics platform. Dynamics will catch up at some point, but for now it's better to pull things off platform if it's easy.

It is easy (and secure) to use functionapps to provide a REST service callable from a form’s web page through javascript. So let’s do something easy (I spent a few hours on this) and use azure functionapps.

An azure functionapp provides functions on a pay-as-you-go, per-call basis. Each function can do pretty much anything as long as it runs within its governor limits (see the functionapp documentation for those limits).

In our case, our function should just parse the document provided to it in the REST payload and return the parsed text. Since it's functionapps on azure, it is easily secured and accessed from any javascript application we want. We will not cover security in this post, but it's easy to adjust the CORS policy and use a javascript dynamics web resource to secure the web service. We could also add a token-based header/query parameter. This makes functionapps ideal for accessing off-platform processing while still securing access credentials. For example, if you want to access the MS graph from within Dynamics, you can make the call using functionapps vs using a plugin with embedded security parameters (or some other standard tricks). Functionapp security credentials are protected behind CORS, security tokens and other more robust data management models.


To make this more interesting, instead of using java, we’ll use scala, and we’ll create a fat jar for the service since it's so easy to create.

The key thing to remember is that a functionapp is really a set of functions grouped together in an “application” service that runs like any other web program. In fact, it runs much like the old-school CGI programs on web servers from the 1990s. A call comes in, a function within an application (regardless of the language or execution model) is run, and the results are returned to the caller. You can think of a functionapp as a function with some programming code that runs on a “virtual” server that only runs when the function endpoint is called. It’s called serverless computing because you never allocate a server; that's handled for you. All you need to do is organize a hierarchy of files that represent the function, the function's “configuration” and the related programming code.

The functionapps are organized on the “server” like the following directory tree:

  • wwwroot
    • func1: The function to create; you can have multiple.
      • function.json: The configuration that describes how to run the function, e.g. run a .net/nodejs/java program.
      • blah.jar: In this case, a java jar file that contains the JVM bytecode. If this was a nodejs program, we would have an “index.js” type file that contains our function’s code.
    • func2: Another function bundled under the same “functionapp”

Here’s our development checklist:

  • In the azure portal, create a new functionapp called functionappparse.
    • Underneath this functionapp we will create a single function called parse.
    • Create it as a javascript/node app. It doesn’t matter what you create, as we will swap in the configuration and programming files from our scala project.
  • Set functionappparse>Application Settings>FUNCTIONS_EXTENSION_VERSION to beta. “beta” is needed to get the 2.x functionapp execution engine that supports the JVM. If you are using javascript or C# you can leave it on the default.
  • Set the CORS policy to the dynamics servers of interest so that only the CRM servers we care about can access the app.
  • Set the app to only allow ftps (note the s!) deployment so that clear-text ftp is not allowed.
  • Create a scala app that uses the azure annotations to create our “function” to call.
  • Create the function’s configuration.

There is a maven archetype (project template) that you can use to create your project, but I found that it did not work and was too hard to use. Instead, we’ll just create a simple scala project that builds a fat jar and push the fat jar using “lftp” (on linux) to copy the function files to the “serverless server.” This is called “ftp” deployment and is a standard deployment mechanism for websites. The ftp site uses ftps, which is not the same thing as sftp.

Functionapp

Set up a functionapp, by following the azure documentation. Create a function, say called “parse”, as a standard HTTP endpoint function that is called whenever you hit the HTTP endpoint.

Create the functionapp called functionappparse. You will want to also create a function based on javascript/nodejs to create the shell of the function.

You can use the “app service editor” (a separate editor available in the portal under your functionapp settings) to rename, delete or move around files on the “serverless” server. While there is no web server per se, there is filespace allocated for our functionapp files.

The key insight is that each function is just a plain directory under the wwwroot directory. The name of the directory indicates the function name for functionapp management although you can name the actual function called anything you want per the function.json configuration data. Within each directory is the function.json configuration file and then whatever associated “function execution” files are needed–javascript files or jars, etc.

The program execution files can be located anywhere in the overall “functionapp site” as long as the path name to the execution file can be found. You might push a jar file that has several functions to the topmost directory and have the function config parameter point to “../something-one-level-up.jar”, as in the sketch below. Just be aware that you may run into execution issues if you are not careful, e.g. nodejs node_modules directories.
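For example, a function.json fragment that references a jar one level up (the jar name here is hypothetical; the full function.json format is shown later in this post):

{
  "scriptFile": "../shared-functions.jar",
  "entryPoint": "ttg.Function.parse"
}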

The function we want is:

  • anonymous: No “security code” is needed to call it. An anonymous function does not need a key in the header to be called. Since we are only calling this from dynamics and we use CORS and SSL, we should be fine with an anonymous function.
  • Use a standard mode.

These are all found on the “integrate” config page:
[Screenshot: the function's “integrate” configuration page]

After going through the config for a javascript http endpoint, you’ll find, by looking in the app service editor, something like these two pictures:
[Screenshot: the generated function files in the app service editor]

[Screenshot: the generated function.json in the app service editor]

We will be replacing the javascript function with a java jar file.

Ultimately, our function.json will look like:

{
  "scriptFile": "functionapps-assembly-0.1.0.jar",
  "entryPoint": "ttg.Function.parse",
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "direction": "in",
      "authLevel": "anonymous",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "name": "$return",
      "direction": "out"
    }
  ]
}

The bindings map function parameters to request data, or output values to functionapp responses. You can also inject other values into the function parameters, for example, values from an azure table. Reviewing the options available in the portal allows you to identify and obtain the needed json for each type of additional input you may want. Environment variables, a mechanism that is good enough for usernames/passwords, can be set in the functionapp's application settings and accessed through the normal java environment access APIs.
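As a minimal sketch (the setting name is hypothetical), reading such a value from scala:

// app settings surface as environment variables inside the function host
val password: Option[String] = sys.env.get("DYNAMICS_PASSWORD")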

The scriptFile entry indicates the program that will be run by the server “loader” that is specific to each language/execution model. In this case it's a .jar file, so a JVM “loader” will be used to load the jar, find the entry point via reflection, and call it when the event occurs. The scriptFile entry above indicates the jar is in the same directory as the function.json file, but as long as the path in scriptFile is set correctly, you could locate the jar anywhere in the filespace.

You can literally cut and paste this configuration into the function.json file and restart the functionapp using the “restart” button. If you were to run this, it would error out because we have not uploaded the fat jar functionapps-assembly-0.1.0.jar to the server yet. That’s next.

Scala Project

We need to create a scala project that uses tika and adds in the function entry point used by azure functionapps. We want to create an output fat jar called functionapps-assembly-0.1.0.jar that includes everything needed. It will be large and it will be easy to move around (see the deployment section below).

Here’s what we need:

  • build.sbt
  • project/build.properties, project/plugins.sbt
  • functionapps/src/main/scala/parsedoc.scala

That’s all the source we need. We won’t cover all of the contents but will show the critical parts. Contact me for a full example (I’ll try to find time to create an sbt g8 template project for it).
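For completeness, a plausible project/build.properties (the exact sbt version is an assumption; any sbt 1.x should work with the plugin below):

# project/build.properties
sbt.version=1.1.6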

Here’s the project definition in our build.sbt:

val circeVersion = "0.9.3" // json processing
lazy val functionapps = project
  .settings(baseCommonSettings)
  .settings(libraryDependencies ++= Seq(
    "io.circe" %% "circe-core",
    "io.circe" %% "circe-generic",
    "io.circe" %% "circe-parser",
    "io.circe" %% "circe-optics"
  ).map(_ % circeVersion))
  .settings(libraryDependencies ++= Seq(
    ("org.apache.tika" % "tika-core" % "1.18")
      .exclude("commons-logging","commons-logging-api"),
    ("org.apache.tika" % "tika-parsers" % "1.18")
      .exclude("commons-logging","commons-logging-api"),
    ("com.microsoft.azure" % "azure-functions-java-core" % "1.0.0-beta-2")
      .exclude("commons-logging","commons-logging-api"),
  ))
  .settings(
    assemblyMergeStrategy in assembly := {
      case "META-INF/blueprint.handlers" => MergeStrategy.first
      case "META-INF/cxf/bus-extensions.txt" => MergeStrategy.first
      case "mozilla/public-suffix-list.txt" => MergeStrategy.first
      case PathList("org", "apache", "commons", "logging", xs @ _*) => MergeStrategy.first
      case x =>
        val oldStrategy = (assemblyMergeStrategy in assembly).value
        oldStrategy(x)
    }
  )

We have some boilerplate here to help remove the conflicts that occur when we create a fat jar. The issue is that by slamming together a large number of jar files into a single jar, we get conflicts because of packaging badness on the part of some jars. The merge strategy clauses and excludes help us work around that.

Here’s the plugins:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

If you use the maven archetype mentioned earlier, you get a java file that looks like:

package ttg; // for The Trapelo Group, not in the mvn template

import java.util.*;
import com.microsoft.azure.serverless.functions.annotation.*;
import com.microsoft.azure.serverless.functions.*;

/**
 * Azure Functions with HTTP Trigger.
 */
public class Function {
    /**
     * This function listens at endpoint "/api/hello". Two ways to invoke it using "curl" command in bash:
     * 1. curl -d "HTTP Body" {your host}/api/hello
     * 2. curl {your host}/api/hello?name=HTTP%20Query
     */
    @FunctionName("hello")
    public HttpResponseMessage<String> hello(
            @HttpTrigger(name = "req", methods = {"get", "post"}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");

        // Parse query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);

        if (name == null) {
            return request.createResponse(400, "Please pass a name on the query string or in the request body");
        } else {
            return request.createResponse(200, "Hello, " + name);
        }
    }
}

Now we just need to replace the java file with a scala file (in the scala sources directory) that has the entry point and calls the tika parse function.

package ttg

import java.util._
import java.io._
import com.microsoft.azure.serverless.functions.annotation._
import com.microsoft.azure.serverless.functions._

import scala.util.control._
import cats._, cats.data._, cats.implicits._

import org.apache.tika, tika._, tika.metadata._
import _root_.io.circe._, _root_.io.circe.parser.{parse => parsejson}
import _root_.io.circe.optics.JsonPath._
import _root_.io.circe.generic.auto._, _root_.io.circe.syntax._

case class ParseSuccess(
  /** Text with escaped whitespace characters. */
  content: String
)

class Function {

  val tika = new Tika()

  def extract(base64: String, filename: String) = {
    val decodedBytes = Base64.getDecoder().decode(base64)
    val inputStream = new ByteArrayInputStream(decodedBytes)
    val m = new Metadata()
    m.add(TikaMetadataKeys.RESOURCE_NAME_KEY, filename)
    Either.catchNonFatal(tika.parseToString(inputStream, m))
  }
  
  @FunctionName("parse")
  def parse(
    @HttpTrigger(name = "req", methods = Array("get", "post"), authLevel = AuthorizationLevel.ANONYMOUS)
      request: HttpRequestMessage[Optional[String]],
    context: ExecutionContext): HttpResponseMessage[String] = {

    context.getLogger().info("HTTP trigger: Parse a document.")
    val args: ValidatedNel[String, (String, String)] =
      parsejson(request.getBody().orElse("{}")) match {
        case Left(pfailure) => Validated.invalidNel(s"Malformed body: ${pfailure.show}")
        case Right(json) =>
          (
            root.filename.string.getOption(json).toValidNel[String]("No filename field provided."),
            root.content.string.getOption(json).toValidNel[String]("No content field provided.")
          ).mapN{ (f,c) =>  (f,c) }
      }
    context.getLogger().info(s"args: $args")
    args match {
      case Validated.Valid((f, c)) =>
        extract(c, f) match {
          case Right(extracted) =>
            request.createResponse(200, Json.obj("content" -> Json.fromString(extracted)).noSpaces)
          case Left(t) =>
            request.createResponse(422, Json.obj("errors" -> Json.arr(Json.fromString(t.getMessage()))).noSpaces)
        }
      case Validated.Invalid(msgs) =>
        request.createResponse(400, Json.obj("errors" -> Json.fromValues(msgs.toList.map(Json.fromString))).noSpaces)
    }
  }
}

To create the fat jar, run the sbt command functionapps/assembly. The output is a single, very large file at functionapps/target/scala-2.12/functionapps-assembly-0.1.0.jar.

The java annotations are present even though, in our case, they are unneeded. They are used by the maven azure plugin to generate function.json automatically. However, since we are using sbt and hand-writing function.json, they go unused and you could remove them.

Deployment

We deploy the fat jar using lftp on linux. You obtain the userid and password in the publishing settings off the main configuration page for the functionapp.

[Screenshot: downloading the publish profile from the functionapp's main configuration page]

Once you download the publish settings, you can look for the ftp entries in the file. We have used xml_pp -i on the file to make it more readable. (Note: this endpoint no longer exists and I have elided the passwords.)

<publishData>
  <publishProfile SQLServerDBConnectionString="" controlPanelLink="http://windows.azure.com" destinationAppUrl="http://functionappparse.azurewebsites.net" hostingProviderForumLink="" msdeploySite="functionappparse" mySQLDBConnectionString="" profileName="functionappparse - Web Deploy" publishMethod="MSDeploy" publishUrl="functionappparse.scm.azurewebsites.net:443" userName="$functionappparse" userPWD="...big long string..." webSystem="WebSites">
    <databases/>
  </publishProfile>
  <publishProfile SQLServerDBConnectionString="" controlPanelLink="http://windows.azure.com" destinationAppUrl="http://functionappparse.azurewebsites.net" ftpPassiveMode="True" hostingProviderForumLink="" mySQLDBConnectionString="" profileName="functionappparse - FTP" publishMethod="FTP" publishUrl="ftp://waws-prod-blx-265.ftp.azurewebsites.windows.net/site/wwwroot" userName="functionappparse\$functionappparse" userPWD="...big long string..." webSystem="WebSites">
    <databases/>
  </publishProfile>
</publishData>

Grab the last username and password since it’s the FTP publish location.

Now all you need to do is use lftp:

> cd functionappparse # go to your scala project directory
> lftp waws-prod-blx-265.ftp.azurewebsites.windows.net
> login functionappparse\$functionappparse ...big long string...
> cd site/wwwroot/parse
> put functionapps/target/scala-2.12/functionapps-assembly-0.1.0.jar

You could also use the app service editor in the azure portal to manipulate the function.json file and upload the fat jar manually.

That’s it!

Test it

You may want to restart the functionapp and then call the service from your javascript. To find the “url” to call, access the “</> Get function URL” link found on the function overview page:

[Screenshot: the “Get function URL” link on the function overview page]

Then “copy” it to the selection buffer on your computer:
[Screenshot: copying the function URL]

Notice that because we are using an anonymous function, there is no “code” query parameter to use with our CLI curl command or javascript call once we write that function for dynamics.

To test it, you can push a .docx encoded file to the web service using curl or you can use the azure portal.

Let’s use curl in this test example. You can base64-encode a file on linux using the command base64 -w 0 blah.docx | xsel. The xsel sends the contents to the cut buffer so you can paste the encoded file into a payload.json file you create in a text editor:

{
 "filename": "blah.docx",
 "content": "...base64 encoded content; remember, no hard newlines are allowed..."
}

Then call it using curl. If you navigate to the functionapp and select the specific function, you can copy the URL to call and paste it in. You can use any REST tool, including something like Postman.

curl -X POST https://..../parse -d @payload.json 

and the result:

{"content":"Text from a word docx.\nThis was parsed!\n"}

If you use the “Run” button in the azure portal functionapp browser, you get the same result:
[Screenshot: the “Run” button output in the azure portal]

Since this is a serverless function, there is no server to log into to obtain the logs. You can obtain the logs by looking at the dashboard in the azure portal. If you get a 404 error when you call your function, you may not have set the platform runtime engine to “beta” vs the default of “~1”. You may also need to restart the functionapp after you upload a new .jar file, so be aware of your development lifecycle and functionapp caching.

Wednesday, May 2, 2018

customizing Dynamics using React

customizing Dynamics using React

If you customize CRM with React, you know that there are a lot of moving parts. There have been a few blogs over time on this topic but nothing concentrated. I’ve put together my notes around this topic and curated them into a gitbook. Please take a look; I’m interested in best practices. Of note, the newer parts of the CRM interface are written partly with React, and I think that trend will only grow.


I also posted to the community forum to try and gather more best practices in this area. As far as I know, the library above is the first open source React-based Dynamics CRM library available with such a wide variety of content in it. A few friends said this would be helpful to them, so I’ve gathered everything into a mono-repo for convenience.

Monday, March 12, 2018

programmatically setting settings

programmatically setting settings

With Dynamics CRM online, you have never been able to fully automate a solution deployment. With the on-prem version, you can modify the underlying database as needed to accomplish certain tasks, but you do not have access to the database in the online version.

If you want to “script” the settings for an org you just created, you were a bit out of luck. You can export/import a limited set of settings along with your solution; however, not all settings can be set using the solution import/export model.

For example, I have some large web resources that need to be uploaded in a solution. You cannot export/import the attachment max file size setting, which controls the allowable size of web resources, using the older web API or the solution export/import model. This is a well known problem.

How can you automate this?

Use the new Web API in v9.0! The new API allows you to set the settings (except for some settings around categorized search and the preview APIs).

The new web API has an entity called “organization” that contains most, but not all, of the settings.

Use it like any other web API call.
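For example, a hypothetical PATCH against the organization entity that raises the attachment max file size (the organization id and the size value are made up; maxuploadfilesize is the underlying attribute name, in bytes):

PATCH [Organization URI]/api/data/v9.0/organizations(d6f193fc-ce85-4084-91c5-2be2ed8a7d0f)
Content-Type: application/json

{ "maxuploadfilesize": 10485760 }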

You can always try the command line tool dynamics-client, which allows you to set the settings from a file. You can also use powershell, but I found it easier to create a json settings file and have that loaded for me vs writing a script.

Saturday, January 20, 2018

microsoft dynamics crm and electron web app.md

microsoft dynamics crm and electron web app.md

You can easily create a desktop app using javascript, css and html. The electron platform allows you to create a javascript app, just like you would with the web version of dynamics crm, and place it on the desktop. The only real difference is that you are not using crm forms. It’s a custom app.

Here’s a recipe for doing it:

  • create your web app. In my case, I have a “tool” that typically loads as a dynamics solution. I’m going to modify only a few lines to get it to work with electron. The tool shows plugin trace logs and updates itself using polling to track the latest log. The stack is:
  • node
  • electron
  • react
  • office-ui-fabric-react

Modify Your App Entry Point

I use the dynamics-client-ui toolkit for creating react-based interfaces for dynamics crm. The code below has been touched only to include some electron-specific init code:

import * as React from "react"
import * as ReactDOM from "react-dom"
import * as PropTypes from "prop-types"
const cx = require("classnames")
const fstyles = require("dynamics-client-ui/lib/Dynamics/flexutilities.css")
const styles = require("./App.css")
import { Fabric } from "office-ui-fabric-react/lib/Fabric"
import { PluginTraceLogViewer } from "./PluginTraceLogViewer"
import { Navigation } from "./Navigation"
import { Dynamics, DynamicsContext } from "dynamics-client-ui/lib/Dynamics/Dynamics"
import { getXrm, getGlobalContext, isElectron } from "dynamics-client-ui/lib/Dynamics/Utils"
import { XRM, Client, mkClientForURL } from "dynamics-client-ui"
import { BUILD, DEBUG, API_POSTFIX, EXEC_ENV } from "BuildSettings"
import "dynamics-client-ui/lib/fabric/ensureIcons"
import { Config, fromConfig } from "dynamics-client-ui/lib/Data"

let config: Config

if (EXEC_ENV === "ELECTRON") {
    console.log("Configuring data access for electron")
    const electron = require("electron")
    const tokenResponse = electron.remote.getGlobal("adalToken")
    const adalConfig = electron.remote.getGlobal("adalConfig")
    if (!tokenResponse || !adalConfig) console.log("Main vars were not passed through correctly. tokenResponse:", tokenResponse, "adalConfig:", adalConfig)
    config = { APIUrl: adalConfig.dataUrl, AccessToken: () => tokenResponse.accessToken }
} else {
    console.log("Configuring data access assuming in-server web")
    config = { APIUrl: getGlobalContext().getClientUrl() }
}

export interface Props {
    className?: string
    config: Config
}

export class App extends React.Component<Props, any> {
    constructor(props, context) {
        super(props, context)
        this.client = fromConfig(props.config)
    }
    private client: Client

    public static contextTypes = {
        ...Dynamics.childContextTypes,
    }

    public render() {
        return (
            <div
                data-ctag="App"
                className={cx(fstyles.flexHorizontal, styles.app, this.props.className)}
            >
                {false && <Navigation className={cx(fstyles.flexNone, styles.nav)} />}
                <PluginTraceLogViewer
                    client={this.client}
                    className={cx(styles.plugin, fstyles.flexAuto)}
                />
            </div>
        )
    }
}

export function run(el: HTMLElement) {
    ReactDOM.render(
        <Fabric>
            <App
                className={cx(styles.topLevel)}
                config={config}
            />
        </Fabric>,
        el)
}

// shim support
if ((BUILD !== "PROD" && typeof runmain !== "undefined" && runmain === true) ||
    EXEC_ENV === "ELECTRON") {
    window.addEventListener("load", () => {
        // @ts-ignore
        run(document.getElementById("container"))
    })
}

You can see that very little code has been touched to adapt to some javascript injected from the main electron process.

My electron start up code is:

const { app, BrowserWindow } = require('electron')
const path = require('path')
const url = require('url')
const fs = require("fs")
const adal = require("adal-node")
const AuthenticationContext = adal.AuthenticationContext

function turnOnLogging() {
    var log = adal.Logging
    log.setLoggingOptions(
        {
            level: log.LOGGING_LEVEL.VERBOSE,
            log: function (level, message, error) {
                console.log(message)
                if (error) {
                    console.log(error)
                }
            }
        })
}
//turnOnLogging()

const argsCmd = process.argv.slice(2);
console.log("ADAL configuration file", argsCmd[0])
const adalConfig = JSON.parse(fs.readFileSync(argsCmd[0]))
global.adalConfig = adalConfig
const authorityHostUrl = adalConfig.authorityHostUrl + "/" + adalConfig.tenant
const context = new AuthenticationContext(authorityHostUrl)
const adalToken = new Promise((res, rej) => {
    context.acquireTokenWithUsernamePassword(adalConfig.acquireTokenResource,
        adalConfig.username,
        adalConfig.password || process.env["DYNAMICS_PASSWORD"],
        adalConfig.applicationId,
        (err, tokenResponse) => {
            if (err) {
                console.log("Error acquiring token:", err)
                global.adalToken = null
                rej(err)
            } else {
                console.log("ADAL token response:", tokenResponse)
                // stash the raw token for the renderer process; keep
                // adalToken itself a promise so createWindow can sequence on it
                global.adalToken = tokenResponse
                res(tokenResponse)
            }
        })
})

// refresh with
//context.acquireTokenWithRefreshToken(tokenResponse['refreshToken'], adalConfig.clientId, null, (e, t) => {...})


// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let win

function createWindow() {
    return adalToken.then(tok => {
        // Create the browser window.
        win = new BrowserWindow({ width: 800, height: 600 })

        // and load the index.html of the app.
        win.loadURL(url.format({
            pathname: path.join(__dirname, "dist", "ttg_", "WebUtilities", "App.electron.html"),
            protocol: 'file:',
            slashes: true
        }))

        // Open the DevTools.
        win.webContents.openDevTools()

        // Emitted when the window is closed.
        win.on('closed', () => {
            // Dereference the window object, usually you would store windows
            // in an array if your app supports multi windows, this is the time
            // when you should delete the corresponding element.
            win = null
        })
    })
}

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
// Some APIs can only be used after this event occurs.
app.on('ready', createWindow)

// Quit when all windows are closed.
app.on('window-all-closed', () => {
    // On macOS it is common for applications and their menu bar
    // to stay active until the user quits explicitly with Cmd + Q
    if (process.platform !== 'darwin') {
        app.quit()
    }
})

app.on('activate', () => {
    // On macOS it's common to re-create a window in the app when the
    // dock icon is clicked and there are no other windows open.
    if (win === null) {
        createWindow()
    }
})

I won’t show all the gooey webpack config code, but the key is to ensure that your target is set to electron so that the various electron* modules lurking in node_modules are resolved correctly.
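A minimal sketch of the relevant fragment (depending on your webpack version, the UI bundle uses the "electron-renderer" target, or plain "electron" in older releases):

// webpack.config.js (fragment)
module.exports = {
    // ...entry, output and loaders elided...
    target: "electron-renderer",
}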

The key thing about this code, which is almost exactly the code found on the electron website with a small tweak for adal authentication, is that very little needs to be done to get something running quickly. Obviously, much more effort is needed to make it more usable, e.g. configuring some menus.

Note the promise setup on the token. If the authentication takes too long compared to the startup of the embedded web browser, you would have a sequencing issue. We use the promise to sequence the startup.

I can run this from my dev directory using npx electron . <path to crm adal .json config file>.

I’ll post the entire project to github at some point.