Author: Derek

  • Creating a new Developer Toolbox

    I haven’t blogged much lately, but I’ve been continuing to explore Blazor and find ways to learn, build, and experiment with it. I must say, I really love WebAssembly as a client-only platform.

    So I built a utility app for myself and for all the developers out there who end up googling a few basic tasks on a regular basis. I call it MyDev.Tools, and you can check it out at https://mydev.tools

    MyDev.Tools has the following utilities:

    I will continue to add tools as I get feedback. I have a few more queued up for the next release.

    Coding is fun, and should add value. My hope is to have a reliable one-stop-shop for some basic tasks. Thanks for checking it out.

  • Dad Status Indicator

    It’s April 2020. Since mid-March we have been told to work exclusively from home. Schooling has moved to ‘distance learning.’ Statewide ‘stay-at-home’ orders have been issued across the country and beyond. This is COVID-19, and these are the new realities facing our culture and every culture worldwide.

    How do I respond? Build something.

    Working from home isn’t new to me, but WFH with all my kids also in the house is a totally different dynamic. And with most of my workday spent interacting with my coworkers via web calls, the need for some aspect of privacy is more important than ever.

    Hence, the Dad Status indicator.

    A simple way to show my family whether or not I am available to solve their problem, hear their tale of woe about the terrible thing their sibling did, or make lunch.

    The Dad Status Indicator (DSI) is a simple IoT rig using a Particle Photon and a couple colored LEDs. I also put together a quick web-app so I can toggle the status from my browser or phone.

    Materials

    • Picture frame
    • Foam core
    • Colored card stock
    • 1 Particle Photon
    • Breadboard
    • 1 red LED
    • 1 green LED
    • LED holders
    • 1 2.2kΩ resistor
    • 18-gauge wire

    After printing the status messages and text onto the card stock, I cut the foam core to size to fit right in the picture frame. The card stock panels are then glued to the foam core.

    I used a nailset to punch the holes through for the LEDs to be inserted.

    There is no glass on the frame, mostly due to how the LEDs project out a bit.

    Wiring

    I used 2 digital pins on the Photon for the LEDs and put the resistor inline with GND. Remember, with LEDs the short leg is the cathode (GND) and the long leg is the anode (power).

    Code

    Particle is largely an event-based platform, so subscribing to an event stream is pretty straightforward.

    Subscribe

    Here’s the Firmware code used to receive messages from the event topic and act accordingly:

    // Daddy status indicator

    int greenLed = D0;
    int redLed = D3;

    void setup() {
      #if defined(DEBUG_BUILD)
        Mesh.off();
        BLE.off();
      #endif

      // init pins
      pinMode(greenLed, OUTPUT);
      pinMode(redLed, OUTPUT);

      // subscribe for events (ALL_DEVICES is needed because the
      // events are published through the Particle API as public events)
      Particle.subscribe("daddy/status", statusHandler, ALL_DEVICES);

      // init status as available
      digitalWrite(redLed, LOW);
      digitalWrite(greenLed, HIGH);
    }

    // loop method not needed
    void loop() {
    }

    // event subscription handler
    void statusHandler(String event, String data) {
        if (data == "available")
        {
            digitalWrite(redLed, LOW);
            digitalWrite(greenLed, HIGH);
        }
        else if (data == "occupied")
        {
            digitalWrite(redLed, HIGH);
            digitalWrite(greenLed, LOW);
        }
    }
    

    Since the events are published using the Particle API, you do need to subscribe using the ALL_DEVICES parameter, and publish the event as a Public event.

    More information about the Particle.subscribe() usage can be found on the Particle Docs site.

    Publish

    To publish the events, I’m using the REST API provided by Particle. You need to generate an access token (I used the Particle CLI tool for this) and include that in your API POST.

    Here’s the bit of JavaScript used to publish to the API:

    var available = 1;
    var occupied = 0;
    
    var apiUrl = "https://api.particle.io/v1/devices/events";
    var token = "[[insert Particle access Token here]]";
    
    function UpdateAvailability(state) {
        var data = {
            name: "daddy/status",
            data: state ? "available" : "occupied",
            private: "false",
            ttl: "60"
        };
    
        fetch(apiUrl, {
                method: 'POST', // or 'PUT'
                headers: {
                    'Content-Type': 'application/json',
                    'Authorization': 'Bearer ' + token
                },
                body: JSON.stringify(data),
            })
            .then((response) => response.json())
            .then((data) => {
                console.log('Success:', data);
                WriteSuccess(state);
            })
            .catch((error) => {
                console.error('Error:', error);
            });
    }
    
    function WriteSuccess(state) {
        var feed = document.querySelector("#messages");
    
        let newEl = document.createElement('li');
        newEl.classList.add('list-group-item', 'text-white');
        newEl.classList.add(state ? 'bg-success' : 'bg-danger');
        newEl.innerText = 'Status set to ' + (state ? 'available' : 'occupied') + ' at ' + new Date().toLocaleString();
        feed.prepend(newEl);
    }

    After each post, I am displaying a status message in a feed.
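    As an aside, if you want to unit test the payload logic, the body that UpdateAvailability builds can be pulled into a small pure function. This is a sketch of my own, not code from the repo:

```javascript
// Builds the event payload posted to the Particle API above.
// Same field values as in UpdateAvailability, factored out so it can be tested.
function buildStatusEvent(state) {
    return {
        name: "daddy/status",
        data: state ? "available" : "occupied",
        private: "false",
        ttl: "60"
    };
}
```

    UpdateAvailability would then just JSON.stringify the result of buildStatusEvent(state) into the fetch body.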

    Web app

    The web app is super simple. Just a couple buttons to trigger the post and a feed showing the history.

    As the buttons are clicked you can see the message list update.

    All the code for this project can be found on Github at https://github.com/smithderekm/daddy-status-indicator

    Future Enhancements

    Now that the foundation is set, we can build upon it.

    • Use Dash button intercept to change status – single click for Occupied; double-click for Available;
    • Subscribe to Outlook calendar to get meetings feed and automatically update status at start and end of each meeting.

  • DIY Quad Monitor Stand

    While working from home has been a normal for me for some time, the 2020 COVID-19 social distancing mandate has added another element to making my home office even more functional. This time around, I wanted to finally get a rig set up so that I could effectively use all four of the LCD monitors in my possession.

    So, in the spirit of DIY and building things out of my favorite material – 3/4 in plywood – I set out to design a quad monitor stand that could sit atop my desk and fit under the shelving above.

    The Before Shot

    Here’s how I was previously set up with only 2 monitors on a simple riser. Insert blanket “Please excuse the mess” comment here.

    The total space between the desktop and the shelves above is about 29 inches.

    Planning out the build

    I looked at a lot of pre-fabricated and other DIY designs. Prices for 4-monitor stands online ranged from $50 to $100. There were a few 2- and 3-monitor designs I found that used black pipe. In the end, I went with a lattice-type design rather than a single central vertical component with horizontal ‘wings.’ I suppose that would work with metal components, but with wood I was worried the weight would be too much to support.

    I began by laying out the monitors on the floor and taking measurements.

    Since each pair of monitors was a different size, I aligned them on the mounting brackets. I was okay with the gap in the middle since I planned to have my webcam sit in that space.

    One thing I also had to consider (though in the end I did not make any adjustments) was that each set of monitors was a different thickness. The older monitors were a full inch thicker than the newer ones.

    The Design

    I decided to use 2 vertical supports aligned on the monitor mounting locations, connected with 2 horizontal pieces for rigidity. I would then simply add small feet to the base on each side to help the rig stand freely and not tip forward.

    Here’s the cutlist I ended up with:

    Item                    Dimension                    Quantity
    Vertical Support        3/4″ x 2 1/2″ x 27″          2
    Horizontal Support      3/4″ x 2 1/2″ x 23 1/2″      2
    Feet                    3/4″ x 2 1/2″ x 6″           4
    Feet Spacers            3/4″ x 2 1/2″ x 2 1/2″       2
    Small mounting plate    1/2″ x 4 1/2″ x 3 1/2″       2
    Large mounting plate    1/2″ x 4 1/2″ x 4 1/2″       2

    I made the monitor mounting blocks using 1/2″ plywood. I had a fabricated VESA mounting plate that I could use to transfer the screw placement onto these blocks.

    Using a table saw to rip down the plywood, a miter saw to cut length, and a drill to create the screw holes, I had all my components ready.

    Mounting plates. Screw hold placement transferred from an existing mounting plate.
    Top: feet; Middle: horizontal supports; Bottom: vertical supports;

    Assembly

    Assembly was pretty straightforward, though I did a lot of dry fitting to check alignment with the actual monitors to make sure that everything lined up as expected. I also had to ensure the vertical placement of the mounting plates was correct so the monitors were not too high to fit under the shelves above my desk.

    I used 1″ drywall screws to attach all pieces together. I chose not to use glue but it wouldn’t hurt to ensure the security of all the joints.

    Initial setup of frame
    Added second horizontal support after checking positioning against actual monitors.
    Mounting plates attached. I used additional screws here to account for weight of monitors.

    To ensure the placement, I did another dry fit against the monitors to ensure the spacing and vertical placement was correct. In my case the lower plates are 7 3/8″ up from the bottom, and the uppers are 19 3/4″ up.

    Feet attached

    Hardware

    I believe it’s the universal VESA standard: the mounting screws are metric, with spec M4-0.7.

    At Home Depot I found both 25mm (about 1″) and 30mm (about 1 1/4″) length screws. These are not on the big racks of packets in the hardware aisle, but rather in the drawers with more specialized hardware.

    I also used small washers – and in the case of the 30mm screws, I used 5 washers to create a spacer.

    Mounting the monitors was the moment of truth. Would the plates and screw holes all line up? Wow – got it on the first try!

    Top is 25mm screw with 1 washer. Bottom is 30mm screw with 5 washers.

    The After Shot

    All monitors mounted beautifully. Notice the 1 1/4″ hole I had to drill for the lower monitors. Some design genius at Lenovo decided putting the power cord outlet directly below the mounting plate was a good idea. This hole allows the cord to go directly in, and in the long run it might be the only flaw or risk in my design, since removing 60% of my vertical support there obviously weakens the integrity of the plywood. So far, though, it appears to be okay.

    I also found that the mounting plate for the lower-left monitor appears to be slightly out of alignment (as shown by the clockwise misalignment in the gap between the top and bottom monitors.) I will need to adjust this slightly.

    Here’s the final install on the desktop. I have 2 computers (desktop and laptop) so here’s the configuration I used to wire up each:

    Laptop: HDMI out goes to lower left; USB-to-VGA adapter goes to lower right; VGA out goes to KVM switch tied to upper right.

    Desktop (has 2 DisplayPort outputs): DisplayPort-DVI #1 goes to upper left; DisplayPort-DVI #2 goes to upper right;

    When I’m in work mode, I have the laptop driving 3 out of 4 monitors (shown with grey background), and only the upper-left one is displaying for the Desktop PC (with blue background.) I am using Mouse without Borders to create a seamless drag experience between all 4 monitors.

    In non-work mode, I can have my Desktop PC drive the two upper monitors.

    I am really satisfied with how this turned out. I spent about $5.00 on the hardware and used up several pieces of plywood scrap. Now, let’s get this WFH thing rolling!

  • Year of Health – Baseline Allergy Measures

    One of my 2019 goals is to Eat like a Vegan, motivated by a desire to overcome some longstanding health conditions.

    One step along this path was requesting a Food Sensitivity blood test during my last annual physical, which happened to be during the last week of 2018.

    Here are the results, which represent not just some good data in support of veganism, but also provide a baseline from which to track. I’m thinking I can repeat this blood test (or one like it) quarterly to try and measure actual progress and impact of my dietary changes.

    Here’s the key for these measures:

    Class   IgE kU/L       Interpretation

    0       <0.10          Negative
    1       0.10-0.34      Equivocal
    2       0.35-0.70      Low
    3       0.71-3.50      Moderate
    4       3.51-17.50     High
    5       17.51-50.00    Very High
    6       50.01-100.00   Very High
    7       >100.00        Very High

    Component                    Your Value     Standard Range
    IgE                          6,610 IU/mL    4 – 269 IU/mL
    IgE Allergen Clam            0.37 kU/L      <0.10 kU/L
    IgE Allergen Cod Fish        0.14 kU/L      <0.10 kU/L
    IgE Allergen Corn (Maize)    10.70 kU/L     <0.10 kU/L
    IgE Allergen Egg White       0.27 kU/L      <0.10 kU/L
    IgE Allergen Milk, Cow       0.21 kU/L      <0.10 kU/L
    IgE Allergen Peanut          35.10 kU/L     <0.10 kU/L
    IgE Allergen Shrimp          1.66 kU/L      <0.10 kU/L
    IgE Allergen Soybean         0.46 kU/L      <0.10 kU/L
    IgE Allergen Walnut          0.27 kU/L      <0.10 kU/L
    IgE Allergen Wheat           2.62 kU/L      <0.10 kU/L
    IgE Allergen Scallop         7.15 kU/L      <0.10 kU/L
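    For the engineers in the audience, the class key translates into a trivial lookup. Here’s a quick sketch of my own, purely illustrative, mapping a kU/L value to its class:

```javascript
// Map an IgE allergen result (kU/L) to its class per the key above.
function igeClass(kUL) {
    if (kUL < 0.10)   return 0;  // Negative
    if (kUL <= 0.34)  return 1;  // Equivocal
    if (kUL <= 0.70)  return 2;  // Low
    if (kUL <= 3.50)  return 3;  // Moderate
    if (kUL <= 17.50) return 4;  // High
    if (kUL <= 50.0)  return 5;  // Very High
    if (kUL <= 100.0) return 6;  // Very High
    return 7;                    // Very High
}
```

    For example, my corn result of 10.70 kU/L lands in class 4 (High).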

    Peanut is no surprise. That is one allergen I have long known was a problem and have no problem avoiding.

    Wheat (and, I assume, therefore gluten) has also been on my suspect list for some time, and while I have not fully adopted a gluten-free diet, I now at least have a data point to support doing so.

    Corn, however, is pretty striking, especially given its 100-fold value over the non-reactive threshold. So that is definitely one to pay attention to. Avoiding corn will probably be the hardest, since I love me some tortilla chips and popcorn, and corn is used almost universally as an ingredient.

    The seafoods are noteworthy – but I’ve never been much of a seafood fan, so again, avoiding these will not necessarily be problematic.

    Egg and milk are in the ‘low’ range, but true veganism will drop those out as well.

    I suppose the other interesting observation is that I have some reaction to ALL the allergens, and that my total IgE count is almost 25 times the top of the standard range. That alone reinforces my feeling of general un-health. Clearly, my body is reactive, and the more I learn about gut health and immune response, the more these numbers confirm that my digestive health has a long way to go.

    Here’s to green smoothies!

  • Spaghetti was easier

    A couple weeks ago, I saw the following image in my Twitter feed:

    I, for one, am old enough to have lived through all 3 of these paradigms for software architecture. The team of which I am a part has been strategically tasked largely with taking a spaghetti codebase and moving to ravioli. So the analogy fits (though in retweeting this pic, I did claim to be a bit more of a tortellini fan myself.)

    That said, what is not illustrated in this infographic is the complexity associated with each form. Arguably, spaghetti is easier, mostly because you can avoid caring about things.

    Attention to architecture creates a burden that developers must appreciate in order to deliver.

    Case in point: If I don’t care about architecture, I can create a single web page that connects directly to my database from the server-side code-behind. This was not unusual in the spaghetti days. Every page opening its own database connection. Defining its own objects (that is, unless you just use the DataRow.) In the spaghetti days, there was no Pragmatic principle of Don’t Repeat Yourself. We were proud of ourselves for making it work, and moved on to the next strand.

    Today, our system has the following flow to query a database:

    Client UI Page -> Controller -> API Wrapper -> API -> Provider -> DataStore -> ORM -> Database

    Now we have a Visual Studio solution containing 10 or 12 assembly projects to accomplish the same thing we used to do in a single page. We transform a database record to an entity, then to a Data Transfer Object, then to a Query result, then to a json object, then to a result object, then to a model.
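    To caricature that conveyor belt in a few lines (all names invented for illustration, not our actual code):

```javascript
// Each layer maps its neighbor's shape into its own -- a toy version of the
// record -> entity -> DTO -> model chain described above.
const toEntity = (row)    => ({ id: row.ID, name: row.NAME });
const toDto    = (entity) => ({ id: entity.id, displayName: entity.name });
const toModel  = (dto)    => ({ label: dto.displayName + " (#" + dto.id + ")" });

// One database row rides the whole pipeline:
const model = toModel(toDto(toEntity({ ID: 7, NAME: "Widget" })));
```

    Three mapping functions for one field rename is absurd at this scale, of course, but each boundary is a seam where a layer can be tested or swapped in isolation.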

    Because now we care. We’ve learned to care over the last 3 decades because there is inherent value in the architectures when it comes to code reuse, reliability, and testability. But now, ravioli making takes time.

    In my own journey towards ravioli, I’m learning to appreciate the steps. The beauty of component parts, each tasked with a single purpose, able to live in its own cozy pocket of dough. But the cost of that coziness is time, forethought, and planning. It’s an investment, and it reminds us that software, like artisan pasta making, is a craft.

  • 2019 Goals (in one Tweet)

    Ah, the new year. Time again to set some goals for personal and professional development. I’m going simple this year, as tweeted:

    Let’s break these out a bit more.

    Code More.

    One of the big changes in 2018 was taking a new job as Principal Software Engineer at FM:Systems. In this role, much more than my previous job as Director of a consulting division, I get to write code. Lots of code. Every day code and more code. It’s fantastic.

    So Goal Achieved, right? Yes, but there’s more. Returning to a full engineering role has reminded me that there is so much out there that code can do, but also so much that needs practice and coaching. Coding, like any skill, can have both a certain muscle-memory, as well as atrophy when not exercised. So the practical application of Code More is to really focus on depth of knowledge with .Net Core and C#, as well as best practices for architecture, unit testing, and new things like Infrastructure-as-Code.

    This Goal is a bit hard to measure per se, so my metric will be blog posts for things I’ve learned, tried, continue to need practice in, and success stories.

    Eat like a vegan

    For most of my life, I have dealt with a couple chronic health conditions, namely asthma and eczema. Over the past decade I’ve dabbled with a variety of what I’ll call non-medical solutions, including acupuncture, cutting out or minimizing certain foods, and supplements of all kinds. In short, nothing has moved the needle.

    This past week, my wife and I watched Eating You Alive, a movie promoting adoption of a Whole Food Plant Based diet. Participants in the documentary told story after story about how giving up meat, dairy, sugar, and oils had dramatic impacts on their health, including reversal of cardiovascular conditions, cancers, and obesity.

    Thankfully, I am in generally good health. I have been blessed (cursed?) with an ectomorph body type. My weight has been unchanged for 20 years. I feel good when I work out (as well as any 40-something, right?) and I’ve been spared any major health incidents.

    But I don’t feel healthy. And clearly, as evidenced by my itchy red skin and wheezy congested lungs, there are some things, shall we say, out of whack!

    Adopting a vegan diet is not going to make me happy. I have a real fondness for steak. I am probably addicted to sugar. But for at least a month at a time, I’m going to try. Here’s to kale in 2019.

    To track progress on this goal, I don’t plan to write out my favorite recipes, but rather to be an engineer about it. I’m working up a series of actual, measurable health-tracking values that I can monitor and report upon. Since redness in my face is my primary visible symptom, I’ve thought about taking pictures in a controlled location/environment and using histogram data to track my actual appearance. More to come on that.

    Build fun lamps as a hobby

    All my life I’ve been a builder. I am comfortable working with my hands, I have a full workshop of tools, and in 2017 I took on the significant project of finishing my basement. However, like coding, I wanted to define some parameters around building that allowed me to really practice my skills. To that end, I have decided to build lamps.

    Lamps are simple objects. But they can take unlimited forms. They can be made of wood, pipe, or found objects. They can be upcycled, recycled, made out of uni-, bi-, or tri-cycles. They can be classic or modern, industrial or country. Basically, there is no wrong answer.

    They don’t know it yet, but my metric of this goal is to produce enough lamps during the course of the year to give as Christmas gifts to each of my family members. Parents + 2(siblings) + 2(inlaws) + wife + 4(children) = 10 creations. Roughly one per month.

    I’ll use this blog to post my designs, and where appropriate, talk about the challenges or discoveries I made along the way for each one.

    So that’s it! 2019 is planned and sized and ready for action. I’m excited to share my progress. Thanks in advance to those who might follow.

    Photo credit: PlusLexia

  • My new Microjobs are live

    I am really excited to announce that I am joining Collab365 MicroJobs – a brand new marketplace dedicated to Microsoft professionals. I have launched 2 initial offerings.

    The first is focused on organizations that believe they may benefit from developing an Internet of Things strategy. For these organizations, I will host a 1-hour IoT brainstorming session.

    Here are 4 reasons the Collab365 team has spent months building the site:

    1. You often need expert Microsoft help just for a couple of hours.
    2. You can’t keep up with everything Microsoft is releasing.
    3. You find it hard to find Microsoft experts on other non-dedicated sites. There are just too many other subjects covered.
    4. You don’t have time to go through a lengthy interview process.

    Here are the details of this offering:

    How I can help you …

    The Internet of Things is revolutionizing every type of business in every sector. But with so much to sift through, it can be overwhelming to know how to get started.

    Do you think your business, company, or industry could catch the IoT wave, but you’re not sure where to begin? This one-hour brainstorming session will help you begin to form a strategy for considering, planning, and implementing an IoT solution to digitally transform your business.

    In this session, we’ll explore questions like:

      1. What are the unknown data points in our business that need to be captured?
      2. How could an IoT solution open up a new revenue opportunity for our company?
      3. Where does our company spend the most time/money fixing things after they become problems?

    This session will be recorded and provided to you for ongoing review, and follow on planning sessions can be added to continue your IoT journey.

    We will perform this session as a 1 hour Skype call (or other similar web conferencing platform.)

    How does it work and what about payment?

    Paying for online services with people you don’t know can be worrying for both parties. The buyer often doesn’t want to pay until they’re happy that the Provider has completed the work. Likewise, the Provider wants to be sure they will be recompensed for their time and commitment. Collab365 MicroJobs helps both the buyer and the seller in these ways:

    1. The buyer pays up front and the money is securely held in the MicroJobs Stripe Connect platform account.
    2. The Freelancer can then begin the work in the knowledge that the payment has been made.
    3. Once the buyer is happy that the work is complete and to their satisfaction, the funds become available to the Freelancer.
    4. There’s even a dispute management function in case of a disagreement. But it won’t be needed on my MicroJob! As long as we agree on what’s needed up front and keep talking the entire way through, you won’t be disappointed.

    Note: Once I’ve completed the work, I’d love it if you could write a review for me. This will allow others to see what a fantastic job I did for you.

    What if we need to add extras to the job after it’s started?

    It’s really easy for us to discuss your extra requirements (using the chat feature on the site) and to agree on a price and add it to the order.

    If you’d like me to help you, here are the steps to hire me …

    1. View my MicroJob.
    2. On that page click the “Buy” button.
    3. You’ll need to register as a buyer on the MicroJobs site, but this only takes a minute and will also allow you to purchase MicroJobs from other awesome Freelancers.

    If you need to contact me then please use the “contact” button and ask me any questions before purchasing.

  • Why I took a new job

    Credit: http://www.uberoffices.com/

    As has been noted on my blog bio (shown below) and my LinkedIn profile, I recently started a new job. Generally speaking, this isn’t earth-shattering news. People start new jobs all the time. In fact, even among the biggest tech firms, the average job tenure is just around 2 years. For me, I was at my previous employer for almost 8 years. Still, this was not a haphazard or spontaneous decision, and though my reasoning is unique to me, I think it carries a few principles worth sharing.

    For context, the job I left was as director of a division within a small professional services firm. The job I took is as a principal software engineer for a small but growing software product company.

    I was ready for a new challenge.

    Between 2001 and 2018, I largely did professional services and consulting work. I worked for myself and as an independent contractor for a time. I worked for the consulting arm within a larger software company. Then I worked for a pure professional services firm. In all of these contexts, the work was generally the same: find a customer with a business need, determine and design a software-based solution, then build that solution as cost-effectively as possible. These projects were almost exclusively billed by the hour and had fixed (and typically aggressive) timelines to deliver.

    Consulting is something I think I have particular skills at, but it is a uniquely constrained way of doing business. I found, whether working for myself or for someone else, that there is a big limit on freedom in this model. You end up doing only the work people hire you for, whether or not it uses the latest tech, and you inevitably take technical shortcuts or skip best practices in order to deliver within the budget. Consequently, after 17 years of operating in this model, I was craving a blue-sky opportunity without the anchors that project-based, billable-hours consulting intrinsically brings to the job.

    That said, consulting is not all bad.  The steady stream of new industries, customers, problems to solve, and yes, various constraints all serve to make you think differently and master the skills of problem solving.  Learning and practicing task estimation and time management is key in any role.

    So now, the new challenge is thinking like a product developer.  Building features, not customizations.  Working on a release schedule rather than a customer timeline.  These are different types of problems to solve, and ones I happily confront.  In short, I reached a time where I was ready to solve new challenges that were unfamiliar, rather than the ones I had seen repeatedly through my consulting career.

    I wanted to catch a wave.

    Part of my new role, and a large part of the corporate strategy of the company in which I am now employed, is embracing the Internet of Things.  IoT is clearly the buzzword du jour in 2018, and it’s a wave I knew I would regret missing.

    I am of a generation that graduated from university in the late 1990s, at the height of the dotcom bubble.  My first job out of college, however, was not with a VC-backed startup, but rather in corporate IT.  While I don’t regret taking a job at a more established company, not being part of that trend is something I’ve always held as a gap in my experience.

    In 2007, when Apple released the first iPhone and largely started the smartphone revolution, I was starting a new job with a software company focused on web apps. I never learned Xcode. I never really got engaged in mobile development, apart from various proof-of-concept apps or demos for presentations. This wave of development passed me by, and it’s something I’ve always been frustrated about.

    In 2018, at the middle of my career, I was determined not to let another massive tech wave pass me by.  My job search was consequently narrowly focused on companies building or adopting IoT solutions.  I looked specifically for organizations that demonstrated a commitment to IoT as a core foundation, not a passing fad.  And, thankfully, I found both.

    I needed to refresh my skills.

    While I have been a coder for nearly my entire career, professional services projects don’t always provide opportunities to write code. Further, many times the code being written is not for large enterprise systems, but for one-off point solutions. Consequently, I found myself in a place where many of the coding skills (and habits, frankly) that I employed were due for a refresh.

    This facet of the decision is largely what moved me out of a management role and back into an engineering role. I wanted to get back to coding and to apply my experience building systems in an environment that would force me to learn the techniques I lacked the opportunity to hone in my previous work. In fact, after only a few days on the job, interacting with members of my team and reviewing code others had written, I saw pretty quickly which areas I needed to refresh. I built a learning plan and have been able to get ramped up on the concepts. What’s more, I now have bona fide work to do that allows me to apply these lessons.

    Bottom Line: Don’t be afraid to jump

    One of the hardest aspects of this career change was reaching a point where I could allow myself to make a choice for my own benefit, rather than worrying about the outcome or impact of me leaving my previous job.  I needed to essentially be selfish.  In that choice, though, I have been able to advance both my personal growth and career track.  So if there is a lesson to take away, it’s that you are in control of your path, and you shouldn’t be afraid to cause disruption along the way.

    Note: my hope too in this change is that I can renew my blogging habits.  Look for new posts about IoT, Azure, and working on a great team.

  • PowerApps Set SharePoint Person field to Current User

    It seems like such a typical request.  And yet, so many hoops to jump through.

    I’ll note at the onset that this post is based on PowerApps 2.0.650

    My scenario is familiar.  I have a SharePoint record, and I am using a PowerApps screen as an Approval type form.

    In my SharePoint item I have fields for Approved (Date/Time) and ApprovedBy (Person or Group).

    So when I save my form, I’d like these fields to be populated with the current time and current user.

    PowerApps basically provides three avenues for updating a record: SubmitForm(), Update(), and Patch().

    SubmitForm is the simplest and uses databound fields.

    Update() allows you to select and update a record (or records) with any values you want, either from your form or otherwise.  But, if you don’t provide a value for a column, it will be blanked out.  Not the greatest.

    Patch() does a bit better than Update in that it will allow you to selectively update a single column and leaves all other values intact.  However, you have to map every column you want to update, so if you have a record with lots of fields, you would have to set values for them all in your Patch() statement.  Annoying.
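    For illustration, here is a sketch of a Patch() call against a hypothetical Approvals list (the list, column, and variable names are assumptions, not from the original post); only the columns named in the change record are touched:

    ```
    // Update just the Approved column on one record; all other columns
    // on the record are left intact.
    Patch(
        Approvals,
        LookUp(Approvals, ID = varRecordId),
        { Approved: Now() }
    )
    ```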

    So what I really wanted to accomplish was to set default values for my databound controls that map to the Approved and ApprovedBy fields, then use SubmitForm() to sweep up all the values and update the record.

    Here’s the approach that worked, and properly passes the Current User through to the Person and Group field:

    1. Set the OnVisible property of my Screen to set a Context Variable with a representation of the current User.
    2. Add a DataCard for the Person field to my EditForm.
    3. Set the Update property of the DataCard to the name of the Context Variable.  This becomes the value passed by the SubmitForm() method to the column in my datasource.
    4. Set the OnSelect property of my Save button to SubmitForm(FormName).
    5. Set the Visible property on the DataCard to False so it is not shown on the form.

    Let’s break this down, first by looking at how the Person record is represented.  PowerApps uses JSON-style notation for objects.  In this case, there is a special object for a SharePoint User, as seen below:

    {
      '@odata.type': "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
      Claims: "i:0#.f|membership|" & Lower(User().Email),
      Department: "",
      DisplayName: User().Email,
      Email: User().Email,
      JobTitle: ".",
      Picture: "."
    }

    A few things to note:

    1. If you’ve used SharePoint on-premise, then the formatting of the claims username may appear odd, but it works for Office365.  If you are connecting to an On-Premise SharePoint data source using a Gateway, then this format may need to change for Windows Accounts.
    2. From what I could tell, the Email property is the most important – even if the other properties (Department, DisplayName, JobTitle, Picture) are not populated, SharePoint will identify the correct user.
    3. Be sure to use the SPListExpandedUser data type.  There is also SPListExpandedReference (used for Choice fields) but it will fail when using a Person field.
    4. You’ll get an error saying ‘A value must be provided for item’ if anything is incorrect.

    So by setting a ContextVariable when the screen loads, I can then pass that object as the properly formatted value during the SubmitForm() call.

    Once the variable is set, we then use it on the data card as the Update value.
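    Putting the steps together, the wiring looks roughly like this (the variable, form, and card names are illustrative, not from the original post):

    ```
    // Screen.OnVisible – stash the current user in a context variable
    UpdateContext({
        varCurrentUser: {
            '@odata.type': "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
            Claims: "i:0#.f|membership|" & Lower(User().Email),
            Department: "",
            DisplayName: User().Email,
            Email: User().Email,
            JobTitle: ".",
            Picture: "."
        }
    })

    // ApprovedBy DataCard.Update – the value SubmitForm() writes to the Person column
    varCurrentUser

    // Approved DataCard.Update – the current timestamp
    Now()

    // Save button.OnSelect
    SubmitForm(EditForm1)
    ```

    With the ApprovedBy card’s Visible property set to false, the user never sees it, but SubmitForm() still sweeps its Update value into the record.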

    Things like this definitely make PowerApps a tool that requires some technical skills to achieve various business requirements.  But as the product matures, I am hopeful that shortcuts will be implemented to make these common scenarios less cumbersome to address.

  • Requested Value ‘Text’ was not found error using SharePoint Online CSOM

    Requested Value ‘Text’ was not found error using SharePoint Online CSOM

    I’ve recently been involved in creating a SharePoint Hosted add-in for a customer, and part of that process includes creating site columns, content types, and libraries in the host web where the add-in is deployed.  The Office Dev Patterns and Practices team has a great starting-point example for this technique using the JavaScript (JSOM) API.

    All was going well – we built and tested our code against a Developer site collection in our Office 365 tenant, and then packaged it up to deploy in the customer’s tenant.

    But no sooner did we attempt to run the provisioning script than we got the error when checking for site columns: Requested value ‘Text’ was not found.

    This was a Tuesday morning.  We reviewed and reran the add-in on our Dev tenant, and it continued to work.

    At first I thought maybe it was something to do with the asynchronous callbacks from the JSOM API – that we were dealing with a timing or sequence issue.  But debugging revealed that the error was thrown on the initial query, never allowing subsequent calls to be made.

    Here’s the basic code I was running to query the list of site columns from the host web.
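    The exact snippet isn’t preserved here, so the following is a reconstruction of the general shape of a host-web fields query via JSOM (the URL variables and callback bodies are assumptions):

    ```javascript
    // Assumes SP.js is loaded and appWebUrl / hostWebUrl were parsed
    // from the add-in's query string.
    var ctx = new SP.ClientContext(appWebUrl);
    var appContextSite = new SP.AppContextSite(ctx, hostWebUrl);
    var fields = appContextSite.get_web().get_fields();
    ctx.load(fields);
    ctx.executeQueryAsync(
        function () {
            // Success: enumerate the host web's site columns
            var e = fields.getEnumerator();
            while (e.moveNext()) {
                console.log(e.get_current().get_internalName());
            }
        },
        function (sender, args) {
            // The 'Requested value Text was not found' error surfaced here,
            // on the very first query
            console.log('Request failed: ' + args.get_message());
        }
    );
    ```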

    As I continued to troubleshoot on my Dev tenant, suddenly, I started getting the same error in that environment.  Strange, right?  This delay made me wonder if there may have been a rollout of a CSOM update that hit my client’s tenant that morning, then my tenant by the afternoon.

    I worked with my customer and we opened a MS Support ticket.

    The support engineer ran a similar API call against a tenant in the same datacenter as ours, and it came back clean.

    We did a screen-share, and I demonstrated the error and provided him the CorrelationID returned in the response.  From there he was able to see a stack trace indicating that the failure occurred on an Enum.TryParse() method call.  Interesting.  He confirmed our code looked okay, and escalated the case to a Development engineer.

    On our second call, we again demonstrated the error when calling the api against our tenant and our client’s.  At first the MS engineer thought it may be related to lookup type columns, so we manually removed a couple that were User and Group type columns.  No change.

    We then decided to try to manually remove all the custom columns and reprovision from scratch.  Upon clicking on the first one (called ‘Current Item Title’) we got an error in the UI.  That was unexpected.

    One thing I noticed, however, was that the Current Item Title column, which should have been a ‘Single line of text’, was actually showing as a ‘Hyperlink or Picture’ type.

    I returned to my code and checked my manifest of field definitions.  Quickly I realized the issue in the field’s schema definition.

    The Field Type=’URL’ indicated that it was a Hyperlink field – but the Format=’Text’ was causing the exception, since the value ‘Text’ was not a valid option in the SPUrlFieldFormatType enumeration.  Suddenly it all came clear.  I’d copied and pasted the field definition, and changed the wrong parameter.
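    Reconstructed from the description above (the attribute values match the post; the field’s other attributes are placeholders), the before and after schema looked roughly like this:

    ```xml
    <!-- Before: the copy/paste left Type="URL" while Format was
         mistakenly changed to "Text" – an invalid SPUrlFieldFormatType -->
    <Field Name="CurrentItemTitle" DisplayName="Current Item Title"
           Type="URL" Format="Text" />

    <!-- After: the intended single line of text; the Format attribute
         goes away along with the URL type -->
    <Field Name="CurrentItemTitle" DisplayName="Current Item Title"
           Type="Text" />
    ```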

    We had to manually remove the column using PowerShell (since it caused the error through the UI).  I then corrected the field schema definition and republished my add-in.  This time through, the provisioning script ran clean, and correctly created the Current Item Title column as a text field rather than URL.

    UPDATE: Here’s the PowerShell to remove the field.  Basically we just retrieve it by name, then delete it – all using the SharePoint Client Object Model (CSOM).
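    A sketch of that script follows; it assumes the SharePoint Online CSOM assemblies are installed, and the paths, site URL, and credentials shown are illustrative:

    ```powershell
    # Load the SharePoint Online Client Object Model assemblies
    Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

    $siteUrl  = "https://contoso.sharepoint.com/sites/customer"
    $ctx      = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
    $password = Read-Host -AsSecureString -Prompt "Password"
    $ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials("admin@contoso.com", $password)

    # Retrieve the broken column by name, then delete it
    $field = $ctx.Web.Fields.GetByInternalNameOrTitle("Current Item Title")
    $field.DeleteObject()
    $ctx.ExecuteQuery()
    ```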

     

    (I did complain to the MS engineer about why the API would allow me to pass in a bad value when creating the field, yet throw an error when trying to retrieve the field.  He offered me the URL to the UserVoice for SharePoint where I could submit my complaint.)

    This cut-and-paste error cost our project a full week on the timeline (due to the back-and-forth delays with the MS escalation) and required two Microsoft engineers to assist in resolving it.