Wednesday, December 20, 2017

msbuild and merging assemblies.md

You probably use ilmerge to merge assemblies for uploading to dynamics.

The easiest way to enable this in visual studio is by adding the msbuild task ilmerge via nuget. A nuget package is essentially a fancy “macro” that does something to your project: it often just adds assemblies to your references list or adds a config file to manage some tool that also gets installed into your solution.

Once you do, nuget adds ilmerge.dll and ilmerge.exe to the “packages” directory in your project. Do not forget to exclude these when you check your project into github or another version control system. You can find instructions for adding the ilmerge task here: https://github.com/emerbrito/ILMerge-MSBuild-Task/wiki or through nuget directly: https://www.nuget.org/packages/MSBuild.ILMerge.Task/.

An issue arises when you host your project files on a network drive. ilmerge will not work in the msbuild task because it is trying to merge assemblies that live on a network drive, and .net does not allow loading assemblies from remote locations by default. .net has a lot of security hassles in general that make life difficult, but this one is easy to overcome.

To overcome this, you need to realize that msbuild loads assemblies in order to execute tasks, so you need to adjust the permissions that msbuild runs with. msbuild lives in visual studio’s installation directories. Assuming you have the appropriate permission, find the directory C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin (use the proper VS version number for you, mine was 15.0) and adjust the MSBuild.exe.config file in it:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- add this line anywhere in the runtime section -->
    <loadFromRemoteSources enabled="true"/>
    ...
  </runtime>
</configuration>

ilmerge will now correctly merge assemblies located on remote sources, such as a network drive. Maybe everyone knows this already, has another workaround, or maybe no one runs their projects on network shares or it’s already set up for them, but I needed to make this modification for my development environment. I did not find a way to modify this permission globally using .net 4.5.

P.S. Don’t forget you still need to add the dynamics 365 SDK via nuget: https://www.nuget.org/packages/Microsoft.CrmSdk.CoreAssemblies/

Monday, December 18, 2017

dynamics customeraddress loading.md

Addresses in dynamics are handled as a special entity with different semantics than most other entities in CRM. You cannot customize an address by creating new relationships, and you cannot use the newer form control features because the form is still the old style, much like the connections form.

Addresses (the logical name is customeraddress; the entity set is customeraddresses) are always allocated along with the primary entities they were designed to serve. For example, two address records are created for each contact and account, with address numbers 1 and 2. Any additional addresses are numbered 3+ when they are created, via an autonumbering scheme. Specific attributes on contact and account are automatically mapped to these two backing addresses. The mapping is two way, so updating the customeraddress record directly also updates the parent entity. The addresses are only available for listing/managing via an entity’s form navigation menu. The address management screens are not present in the July 2017 release of the UCI client.
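For orientation, here is a minimal sketch that lists the two pre-created addresses for a single parent record. It reuses the same dynamics-client QuerySpec/getList calls that appear in the loader below; the parent id and the selected attributes are just illustrative.

  // Sketch only: list the pre-created backing addresses (addressnumber 1 and 2)
  // for one parent record. Assumes the same dynclient/QuerySpec setup as below.
  def listBackingAddresses(parentId: String) = {
    val q = QuerySpec(
      filter = Option(s"_parentid_value eq $parentId and (addressnumber eq 1 or addressnumber eq 2)"),
      select = Seq("customeraddressid", "addressnumber", "line1", "city")
    )
    dynclient.getList[CustomerAddress](q.url("customeraddresses"))
  }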

One question, aside from whether you should use the builtin addresses at all, is how to load them if you decide to use them. Because the first two addresses are already created, an “insert”-only load will leave those two blank. If your data source has a primary address identified, you probably want to load it into account.address1 (addressnumber = 1). Hence, you need to take the set of addresses for each parent, identify the top “1” and “2” addresses and update them, and then insert the rest.

Using dynamics-client we can write a small bit of code to do this. dynamics-client runs under node. You could also do this in any ETL tool of course.

First we need some helper functions:

  def updateAddress(parentId: String, addressNumber: Int, payload: String) = {
    val q = QuerySpec(
      filter = Option(s"addressnumber eq $addressNumber and _parentid_value eq $parentId"),
      select = Seq("customeraddressid", "addressnumber")
    )
    lift {
      val addresses = unlift(dynclient.getList[CustomerAddress](q.url("customeraddresses")))
      if(addresses.length != 1) throw new Exception(s"Invalid existing customeraddress entity found for ${parentId}-${addressNumber}")
      unlift(dynclient.update("customeraddresses", addresses(0).customeraddressid, payload))
    }
  }

  def insertAddress(payload: String) = dynclient.createReturnId("/customeraddresses", payload)

The update first retrieves the customeraddress via its parent id and address number, then updates that customeraddress record. Insert is straightforward.
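As a quick usage illustration (the parent id and payload values here are hypothetical; line1, city and postalcode are standard customeraddress attributes):

  // Sketch only: update the primary (addressnumber = 1) address of a parent
  // and insert an additional one. Both calls return effects that still need
  // to be run as part of a larger program, as in the loader below.
  val payload    = """{"line1":"1 Main St","city":"Springfield","postalcode":"12345"}"""
  val setPrimary = updateAddress("<some contact or account id>", 1, payload)
  val addAnother = insertAddress(payload) // gets addressnumber 3+ automatically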

The logic for identifying the primary address needs to come from the SQL query against the database you are pulling the data from:

  /**
    * Load source side account "location" correctly. First 2 addresses should map to
    * pre-existing addressnumber 1 and 2 and hence should be updates and
    * not inserts. Data should be sorted by crm accountid then sorted by whatever
    * makes the address you want for 1 and 2 appear at the start of the group.
    * 
    * objecttypecode is a string! not a number for this entity: account|contact|...
    */
  val loadAddresses = Action { config =>
    val src: Stream[IO, js.Object] = cats.Applicative[Option].map2(
      config.etl.query orElse config.etl.queryFile.map(Utils.slurp(_)),
      config.etl.connectionFile)(dbStream _)
      .getOrElse(Stream.empty)
    val counter  = new java.util.concurrent.atomic.AtomicInteger(0)
    val xf = xform(config.etl.cliParameters)
    val toPayload = (o: js.Object) => clean(xf(o).asJson)
    val program = src
      .take(config.etl.take.getOrElse(Long.MaxValue))
      .drop(config.etl.drop.getOrElse(0))
      .groupBy(jobj => jobj.asDict[String]("parentid"))
      .map{jobj =>
        val id = jobj._1
        val records = jobj._2
        if(config.etl.verbosity > 0)
          println(s"parentid: ${id}, records: ${records.map(r => Utils.render(r))}")
        counter.getAndAdd(records.length)
        // First two already exist and are updates else inserts
        val updates = records.take(2).zipWithIndex.map{ addr =>
          val parentId = addr._1.asDict[String]("parentid")
          val payload = toPayload(addr._1)
          if(config.etl.verbosity > 2) println(s"Update: parentid=$parentId: ${payload}")
          updateAddress(parentId, 1 + addr._2, payload)
        }
        val inserts = records.drop(2).map{addr =>
          val payload = toPayload(addr)
          if(config.etl.verbosity > 2) println(s"Insert: ${payload}")
          insertAddress(payload)
        }
        (updates ++ inserts).toList.sequence
      }
      .map(Stream.eval(_))

    IO(println("Loading company addresses"))
      .flatMap{_ => program.join(config.common.concurrency).run }
      .flatMap{_ => IO(println(s"# records loaded: ${counter.get()}"))}
  }

This routine is all you need and took about 30 minutes to write. There is a standard xf that interprets command line parameters to add, drop or rename attributes. The core logic is in the val program = ... part. Here we just group the input (the input must be sorted on parentid and then on address preference order), take the first two records per parent and perform updates, and insert the remaining records. Since the database input probably has some fields that should not be inserted into dynamics, we can specify that those be dropped via the standard xf CLI option --drop ‘mysortfield1|mysortfield2’.
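For reference, a hypothetical input row might look like the following after the xf transform drops the sort-only columns (every field name other than parentid is just an illustrative customeraddress attribute):

  // Sketch only: shape of one source row as a js.Object after the xf transform.
  // parentid drives the grouping and the update lookup; the remaining fields
  // end up in the JSON payload produced by toPayload.
  val sampleRow = js.Dynamic.literal(
    "parentid"   -> "<crm account or contact id>",
    "line1"      -> "1 Main St",
    "city"       -> "Springfield",
    "postalcode" -> "12345"
  )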

That’s it!