
aws lambda

By enabling Function URLs, you can now invoke your Lambda functions through public URLs.

Lambda is AWS’s Function-as-a-Service (FaaS) offering that provides serverless, event-driven compute. Lambda functions can play an essential role in a microservice architecture. However, until recently it had one significant limitation: you couldn’t invoke a Lambda function as an API endpoint on its own. It was only possible using other services such as Amazon API Gateway. With the recent announcement, we can now create our APIs using only the AWS Lambda service. Let’s take a look.

Part 1: Basics

Create a New Lambda Function

Let’s go ahead and create ourselves a simple Lambda function using AWS Management Console:

  1. Go to Lambda Dashboard (In this example, I’m using the us-east-1 region. If you prefer another one, switch to that region in the console)
  2. Click Functions on the left pane and then the Create function button.

Create Lambda Function - Step 1

  3. Keep the defaults in the Basic information section.

You can also expand Advanced settings and tick the Enable function URL option, but we are going to do that later in this post.

  4. Click Create function

Enable Function URL

Now that we have a function available, go to the Configuration tab and click Function URL. Then click the Create function URL button. We’ve now come to the screen that was presented to us in the Advanced settings when we were creating the function:

Configure function URL section showing AWS_IAM selected by default

At this step, we only want a Lambda function we can call from the outside. So, to keep things simple, let’s choose NONE as the Auth type.

Click Save to update the settings.
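If you prefer to script this step, the same configuration can be expressed as parameters for Lambda’s CreateFunctionUrlConfig API. Below is a minimal sketch that only builds the parameters; the function name matches the PublicUrlDemo function used in this post:

```python
# Sketch: building the parameters for Lambda's CreateFunctionUrlConfig API.
# The parameters are built separately so they can be inspected before use.

def build_function_url_params(function_name, auth_type="NONE"):
    """Build the request parameters for creating a function URL."""
    if auth_type not in ("NONE", "AWS_IAM"):
        raise ValueError("Auth type must be NONE or AWS_IAM")
    return {"FunctionName": function_name, "AuthType": auth_type}

params = build_function_url_params("PublicUrlDemo")
# With boto3 installed and credentials configured, you would pass these to:
#   boto3.client("lambda").create_function_url_config(**params)
print(params)
```

This keeps the API call itself out of the sketch, so the parameter shape can be checked without touching a real AWS account.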

Now we are back on the Function overview page, and we can see our newly generated URL:

Function URL highlighted

Click on that link to see the URL in action:

Simple public function call result

Part 2: Advanced Topics

Now that we have a working, publicly available API, let’s dig deeper into authentication, CORS, custom domains, and how this feature compares to API Gateway.

Authentication

In Part 1, we briefly saw there are two authentication options:

  • AWS_IAM
  • NONE

Auth type: NONE

We chose NONE to keep things simple. However, even though we selected NONE, AWS still created a policy for us and added it to the function’s permissions. We can view the created policy in the Permissions section:

First, click Permissions on the left pane.

Permissions highlighted in the Configuration tab

Then scroll down on the right to the Resource-based policy section and click View policy document:

Resource-based policy section

and you can see the policy document that allows everyone (“Principal”: “*”) to invoke the function URL (“Action”: “lambda:InvokeFunctionUrl”).

Policy document
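In plain text, the generated statement looks roughly like the sketch below. The account ID in the ARN is a placeholder, and AWS also scopes the statement to the auth type with a condition key:

```python
import json

# Sketch of the resource-based policy statement AWS generates for a
# public function URL. The account ID in the ARN is a placeholder.
statement = {
    "Effect": "Allow",
    "Principal": "*",
    "Action": "lambda:InvokeFunctionUrl",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:PublicUrlDemo",
    "Condition": {"StringEquals": {"lambda:FunctionUrlAuthType": "NONE"}},
}
print(json.dumps(statement, indent=2))
```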

Without this permission in place, we wouldn’t be able to invoke the function. To test that, we can delete the policy statement and try the URL again:

Delete policy confirmation

If you scroll up and call the URL again, you will get a Forbidden (HTTP 403) error.

Forbidden

It’s easy to put that permission back in. Click on Add permissions. Select Function URL. Leave the defaults (Auth type: NONE, Statement ID: FunctionURLAllowPublicAccess, Principal: *, Action: lambda:InvokeFunctionUrl) and click Save.

Add permission

And if you click on the URL, you should see the “Hello from Lambda!” message again.

Auth type: AWS_IAM

Now let’s consider a scenario where we don’t want our API publicly available. For example, we may choose to grant access to a specific user. This might be useful for testing a beta version internally before making it public.

In this example, I’ve created a new policy with lambda:InvokeFunctionUrl permission on our demo Lambda function:

Invoke Lambda IAM policy

Policy document:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunctionUrl",
      "Resource": "arn:aws:lambda:us-east-1::function:PublicUrlDemo"
    }
  ]
}

Then, I created a user with this policy attached:

Invoke Lambda IAM user summary page

Finally, I went back to the Lambda function permissions and updated the auth type, allowing access only to this user:

Updated auth type settings

Even though we changed the auth type to AWS_IAM in permissions, we still need to ensure the auth type is set to AWS_IAM in Function URL settings. If there is a mismatch, the console will give us a warning:

Auth type mismatch warning

If we click on the URL again, we get a Forbidden error.

To make it work, we need to sign the request with our new IAM credentials (as in the access key and secret key we noted down when we created the user). In this example, I used Postman to enter the credentials and sign the message:

Signed request

and we can get our response successfully:

Signed request output
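Behind the scenes, Postman signs the request with AWS Signature Version 4. Below is a minimal sketch of the signing-key derivation; the credentials are fake, and a real request also needs a canonical request and string to sign built from the full HTTP request:

```python
import hashlib
import hmac

# Sketch of the SigV4 signing-key derivation that tools like Postman
# perform behind the scenes. The secret key below is fake.
def sign(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def derive_signing_key(secret_key, date_stamp, region, service):
    """Chain HMACs: date -> region -> service -> 'aws4_request'."""
    k_date = sign(("AWS4" + secret_key).encode(), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

signing_key = derive_signing_key("FAKE/SECRET/KEY", "20240101", "us-east-1", "lambda")
# The final signature is an HMAC of the "string to sign" with this key:
signature = hmac.new(signing_key, b"example-string-to-sign", hashlib.sha256).hexdigest()
print(signature)
```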

Pricing

In terms of pricing, there is no extra charge for using function URLs. The cost of Lambda executions is calculated the same way whether they are invoked from a browser over the public Internet or from a CLI inside your company network. The details of pricing can be found here.

The duration is calculated as per the below rule:

Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 1 ms

This means if you implement authentication and receive unauthenticated calls, you don’t pay for those calls, which helps limit the cost impact of a DDoS attack.
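As a rough back-of-the-envelope sketch (the rates below are assumptions based on published us-east-1 pricing at the time of writing; check the pricing page for current figures):

```python
# Rough Lambda cost estimate. Rates are assumptions based on published
# us-east-1 pricing at the time of writing; rejected (unauthenticated)
# calls are not billed, so they are excluded from the invocation count.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.0000166667  # USD

def monthly_cost(invocations, memory_mb, avg_duration_ms):
    gb_seconds = invocations * (memory_mb / 1024) * (avg_duration_ms / 1000)
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# 1M invocations, 128 MB memory, 50 ms average duration:
print(f"${monthly_cost(1_000_000, 128, 50):.2f}")  # prints $0.30
```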

HTTP Methods

When you click the function URL link in the AWS Management Console, you send a GET request to the endpoint. I also used the GET verb in the example above. In fact, the endpoint supports all HTTP verbs.

For example, if you run the curl example below and send a DELETE request, you still get the same “Hello from Lambda” response with an HTTP 200 status code.

curl --location --request DELETE 'https://{ REPLACE WITH YOUR APIs SUBDOMAIN }.lambda-url.us-east-1.on.aws/' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw ''

I don’t think I would often need to support multiple methods in a single Lambda function, but it is possible to take different actions based on the HTTP method. The following Node.js Lambda example demonstrates that:

exports.handler = async (event) => {
    let httpMethod = event.requestContext.http.method;
    console.log('event:', event);
    let responseMessage = '';
    switch (httpMethod) {
      case 'GET': responseMessage = 'GETting something'; break;
      case 'DELETE': responseMessage = 'DELETEing something'; break;
      default: responseMessage = 'doing something else';
    }
    
    const response = {
        statusCode: 200,
        body: JSON.stringify(responseMessage),
    };
    return response;
};
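If you prefer the Python runtime, an equivalent handler might look like the sketch below, invoked locally with a minimal fake event:

```python
import json

# Sketch: the same HTTP-method routing for the Python Lambda runtime.
def handler(event, context):
    http_method = event["requestContext"]["http"]["method"]
    if http_method == "GET":
        message = "GETting something"
    elif http_method == "DELETE":
        message = "DELETEing something"
    else:
        message = "doing something else"
    return {"statusCode": 200, "body": json.dumps(message)}

# Simulate a function URL invocation locally with a minimal fake event:
fake_event = {"requestContext": {"http": {"method": "DELETE"}}}
print(handler(fake_event, None))
```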

Paths and Query Parameters

Similar to supporting different HTTP methods, we can also access the path and query parameters and do something with them if required.

For example, I modified the code to handle this type of scenario:

Query string example:

exports.handler = async (event) => {
    let requestPath = event.rawPath;
    let requestQueryString = event.rawQueryString;
    let responseData = {
        path: requestPath,
        queryString: requestQueryString
    };
    const response = {
        statusCode: 200,
        body: JSON.stringify(responseData),
    };
    return response;
};
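The raw values can be parsed further if needed. For example, here is a Python sketch that splits the path and query string, using a hand-built event shaped like the one a function URL delivers:

```python
from urllib.parse import parse_qs

# Sketch: parsing rawPath and rawQueryString from a function URL event.
def parse_request(event):
    path_parts = [p for p in event["rawPath"].split("/") if p]
    query = {k: v[0] for k, v in parse_qs(event["rawQueryString"]).items()}
    return path_parts, query

# Minimal fake event mimicking GET /customer/search?name=john
event = {"rawPath": "/customer/search", "rawQueryString": "name=john"}
parts, query = parse_request(event)
print(parts, query)
```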

Command to run:

curl --location --request GET 'https://{ REPLACE WITH YOUR APIs SUBDOMAIN }.lambda-url.{ REPLACE WITH YOUR REGION }.on.aws/customer/search?name=john' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw ''

Output:

Query string response

URL path example to get the details of the customer with id 123456:

Command to run:

curl --location --request GET 'https://{ REPLACE WITH YOUR APIs SUBDOMAIN }.lambda-url.{ REPLACE WITH YOUR REGION }.on.aws/customer/123456' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw ''

Output:

URL path response

Lambda Function URLs vs Amazon API Gateway

In the introduction section, I mentioned that we used Amazon API Gateway for public APIs before the function URLs feature. So even though the feature helps a lot for specific use cases, I think it’s worth comparing both to have more realistic expectations.

Amazon API Gateway features

  • Authentication and authorization
  • Request throttling
  • Usage plans and quotas
  • API Keys
  • AWS WAF integration
  • WebSocket support
  • CORS support
  • Built-in custom domain support

Function URLs

  • AWS_IAM authentication
  • CORS support

So function URLs lack many of these features. The question is, do we need all of them for all our APIs? Chances are that a single public endpoint is good enough for many use cases, and it costs a lot less.

Using Function URLs with Aliases

We can also have different URLs for different aliases. To demonstrate this feature, I will publish the code above that returns the path and query string as a new version:

Switch to the Versions tab and click Publish new version.

In the dialog box, give a meaningful description and click Publish

Create new version

Then switch to the Aliases tab, create an alias, and click Save.

Create new alias

Create new URL for alias

Now you can create a new function URL for this alias. While inside the new alias, click the Create function URL button and follow the same steps you did with the main function (with NONE as the auth type).

If you go to the alias function URL section now, you should be able to see the new URL, which is entirely different from the main function URL (which points to the $LATEST version automatically):

New URL for alias

CORS

You can also enable CORS to restrict where your API can be consumed. For example, you can allow only the domain where your frontend lives and accept only POST requests:

CORS settings
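The same settings can be expressed as the Cors parameter of the CreateFunctionUrlConfig API. Below is a sketch; https://example.com is a placeholder for your frontend’s domain:

```python
# Sketch of a CORS configuration in the shape accepted by Lambda's
# CreateFunctionUrlConfig / UpdateFunctionUrlConfig APIs.
# "https://example.com" is a placeholder for your frontend's domain.
cors_config = {
    "AllowOrigins": ["https://example.com"],
    "AllowMethods": ["POST"],
    "AllowHeaders": ["content-type"],
    "MaxAge": 3600,  # seconds browsers may cache the preflight response
}

def is_origin_allowed(origin, config):
    """Mimic the check CORS performs on the Origin request header."""
    return "*" in config["AllowOrigins"] or origin in config["AllowOrigins"]

print(is_origin_allowed("https://example.com", cors_config))
```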

Part 3: Clean Up

If you followed the steps of this post, you might want to clean up the resources you created. You can do so by following the steps below:

  • Delete the Lambda function (PublicUrlDemo)
  • Delete IAM policy (lambda-invoke-function-url-policy)
  • Delete IAM user (lambda-invoke-function-url-demo-user)

Conclusion

I needed a simple API to carry out a simple task on multiple occasions. My go-to service was always Lambda, but the fact that a Lambda function could only be called from within AWS services or from the command line by authenticated users was a deal-breaker in most scenarios. The ability to develop APIs that can be invoked from anywhere is a significant improvement on the service overall.


aws codeartifact, nuget

CodeArtifact is the software artifact repository service provided by AWS. In the previous article, you covered the basics of NuGet - Microsoft’s package management platform for .NET applications - and looked into package consumption using NuGet.org (the default package repository). In this article, you will learn more about creating and publishing your packages to your private repository using AWS CodeArtifact.


Why host your own NuGet Repository?

You can create a free account on NuGet Gallery and publish your packages freely. So why would you want to host your own repository instead of this option?

Privacy

When you publish your packages to the NuGet gallery (or any other public repository), you instantly expose your code to the world. Anyone can download and examine your code (it may require decompiling .NET assemblies, but it’s still possible). This may be your intention, especially if you’re working on an open-source project, but if you want to keep your code private, having your own private repository is the preferred option.

Access Control

When you have your own private repository, you can control who can access your code. You can also host it in isolated networks so your automation can work without Internet access.

Security

When you publish your packages on a public repository and download from there, your system’s security is tied to that remote system’s security. Any package you download can be compromised and risk your entire system’s security.

Auditing

If you only allow your applications and developers to download packages from pre-approved and audited locations, you won’t have to worry about rogue packages sneaking in.

CodeArtifact Basics

When working with AWS CodeArtifact, you need to be familiar with two concepts:

  • Domains

  • Repositories

Domain

A domain is a logical unit for grouping repositories. In most cases, you would need only one domain for your company or team. The idea of having an artifact repository is to share code between projects, so if you lock down your repositories too much, you might end up with constant access issues or duplicated packages.

Repository

A repository contains software artifacts such as packages, libraries, and scripts that are stored in a centralized manner and meant to be shared among projects.

Set Up CodeArtifact

Go to AWS CodeArtifact in AWS Management Console.

This article uses the us-east-1 (N. Virginia) region. Feel free to use whatever AWS region you prefer. Make sure to check the CodeArtifact pricing page too, although the difference between regions should be negligible for the purposes of the demo project.

Click the Create domain button.

Screenshot showing the Create domain button in the Domains page

Enter the name of your domain. It doesn’t have to be too specific. You will create repositories later, which should be more specific to the project, but the domain can be more generic, such as your company name.

Screenshot showing the domain name entered as cloud-experiments and the Create domain button to finish setup.

AWS CodeArtifact uses the domain name and your account ID to generate a unique domain URL. You will use this when pulling and pushing packages.

Then go to Repositories and click the Create repository button.

Screenshot showing the Create repository button in the Repositories page

The repository name can be the name of your project or something more generic such as shared or common. In this example, I’ll use common. You can also give it a description to explain its purpose.

Screenshot showing the repository settings

You can also select an upstream repository (nuget-store for .NET, which uses NuGet.org). Public upstream repositories are intermediate repositories that download packages from the public repository and cache them. This allows consumers to automatically fetch packages that are missing from the private repository. In this example, leave the public upstream repository empty and click Next.

Next, you choose the domain this repository will belong to. Each repository is part of a domain. You can use cross-account domains as well by selecting the Different AWS account option. In this example, select your own account and select the domain you created in the previous step.

Screenshot showing the AWS account and domain selection

Review your selection in the next screen and click Create repository to finish the setup.

If you click on the repository details, you should see the details such as the ARN of the repository and the domain it belongs to. So far, it’s just a generic repository as you haven’t specified anything about .NET. Next, you will set up the .NET client connection.

Set Up Client Connection

In the packages section, you can see the View connection instructions button. This is very useful for setting up your client.

Screenshot showing the packages list and View connection instructions button.

AWS CodeArtifact supports various package managers such as Maven, Python, npm, and NuGet. For this example, select .NET from the list. The NuGet CLI is more suitable for Windows-based legacy .NET Framework projects; the .NET CLI works best if you’re using a Mac or Linux for a .NET Core or later project.

In the next section, three options are provided. I recommend using the manual setup. This gives you more control over how you set things up. You will likely need the same setup in your CI/CD pipeline as well to restore packages during the build process, so it’s good practice to be able to replicate these steps later.

The next step is to obtain the auth token. The guide shows the command suitable for your platform. On Mac, it should look something like this:

export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain cloud-experiments --domain-owner {YOUR AWS ACCOUNT ID} --region us-east-1 --query authorizationToken --output text`

Run the command. Assuming your default profile credentials are valid and have permission to carry out the request, it obtains an auth token, valid for 12 hours, and assigns it to the CODEARTIFACT_AUTH_TOKEN variable.

The final step is to add your CodeArtifact repository to your NuGet source list by running the following command:

dotnet nuget add source "https://{YOUR DOMAIN NAME}-{YOUR AWS ACCOUNT ID}.d.codeartifact.us-east-1.amazonaws.com/nuget/common/v3/index.json" -n "{YOUR DOMAIN NAME}/{YOUR REPO NAME}" -u "aws" -p "${CODEARTIFACT_AUTH_TOKEN}" --store-password-in-clear-text

The --store-password-in-clear-text flag is only required on non-Windows platforms, as password encryption is not supported there.
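The source URL follows a predictable pattern, so you can build it from your domain, account ID, region, and repository name. Below is a sketch; the account ID is a placeholder:

```python
# Sketch: building the CodeArtifact NuGet v3 index URL from its parts.
# The account ID below is a placeholder.
def codeartifact_nuget_url(domain, account_id, region, repository):
    return (
        f"https://{domain}-{account_id}.d.codeartifact."
        f"{region}.amazonaws.com/nuget/{repository}/v3/index.json"
    )

url = codeartifact_nuget_url("cloud-experiments", "123456789012", "us-east-1", "common")
print(url)
```

A helper like this is handy in CI/CD scripts, where the same URL has to be reconstructed for package restore.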

Once you’ve run the command, you should see a confirmation in your terminal saying the repo was added successfully.

To verify your setup, run the following command:

dotnet nuget list source

You should see both the default NuGet.org and your private repository:

Terminal window showing the registered NuGet sources including the private CodeArtifact repository

Publish Package to Your Repository

Now that you have a private NuGet repository of your own, take advantage of it and publish your first package.

Create a new .NET class library by running the following command:

mkdir CodeArtifactBasics
cd CodeArtifactBasics
dotnet new classlib

Open the project in your IDE, rename Class1.cs to Calculator.cs, and update the contents as below:

namespace CodeArtifactBasics;

public class Calculator
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

Run the following command to create a NuGet package for your project:

dotnet pack

You should have version 1.0.0 of your package created under the bin/Debug directory. The final step is to push it to your CodeArtifact repository by running the following command:

dotnet nuget push ./bin/Debug/CodeArtifactBasics.1.0.0.nupkg --source {YOUR DOMAIN}/{YOUR REPO}

If you refresh your package list on the CodeArtifact page, you should now see your newly published package:

Screenshot showing the newly published package in the packages list

Consume Packages From Your Private Repository

Your calculator library is now in your package repository, available to be consumed in your projects. Create a consumer project by running the following command:

dotnet new console --name CalculatorClient

Then, open the project in your IDE.

To consume the NuGet package, run the following command in a terminal at the root of the new console project:

dotnet add package codeartifactbasics

If you take a look at the command output, you should notice that it queries both nuget.org and your private repository:

Terminal window showing the requests for downloading the NuGet package

You can see the two GET requests sent to both repositories. Since there is no codeartifactbasics package on nuget.org, it returns a NotFound response, while your CodeArtifact repository returns a successful result. It then fetches the actual package and adds it to your project.

You can check the package in your IDE as well. It depends on the IDE but in Rider it looks like this:

Window showing the CodeArtifactBasics package added to the CalculatorClient project.

You can now update the Program.cs with the following code and you should be able to run your program without any issues:

using CodeArtifactBasics;

var calc = new Calculator();
var result = calc.Add(1, 2);
Console.WriteLine(result);

Conclusion

In this article, you covered the basics of AWS’s package management service. You learned how to create a new private repository, authenticate against the repo and publish and consume packages. In later articles, we will dive deeper into the NuGet protocol and CodeArtifact.


dev nuget, dotnet

NuGet is a package manager for the Microsoft development environment. Nowadays, application requirements are more complex than ever. It’s very hard for a development team to implement all the code used in a complex system, especially generic utilities that are common to all applications and not specific to the application domain. This is where consuming already-developed packages comes in very handy.

What’s a NuGet package?

A NuGet package is a zip file, with the .nupkg extension, that contains compiled code, other related files, and some descriptive metadata. This metadata contains the package version, author info, and some other optional information. You’ll learn more about it later in this article.
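You can verify the “zip file plus metadata” claim with a few lines of Python. The sketch below builds a toy, non-installable package in memory and lists its contents:

```python
import io
import zipfile

# Sketch: a .nupkg is just a zip archive. Build a toy one in memory
# containing a .nuspec and a (fake) compiled assembly, then list it.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as nupkg:
    nupkg.writestr("Demo.nuspec", "<package><metadata>...</metadata></package>")
    nupkg.writestr("lib/net7.0/Demo.dll", b"not a real assembly")

with zipfile.ZipFile(buffer) as nupkg:
    names = nupkg.namelist()
print(names)
```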

Why use the NuGet package manager?

As stated before, it’s next to impossible to develop a mid to large-sized application without using any external packages. To manage the packages you consume, you will need a package manager.

The benefits of using NuGet package manager can be summarized as below:

Easy Package Management

NuGet simplifies the process of managing third-party libraries and tools in your projects. It provides a centralized repository where you can discover, download, and install packages with just a few clicks.

Dependency Management

NuGet automatically manages dependencies between packages. When you install a package, NuGet ensures that any other packages it depends on are also installed. This helps prevent version conflicts and ensures that your project uses compatible versions of libraries.

Versioning

NuGet packages are versioned, allowing you to specify the exact version of a package that your project needs. This helps maintain consistency across development, testing, and production environments.

Integration with Visual Studio and other popular IDEs

NuGet is tightly integrated with Visual Studio, which is the primary integrated development environment (IDE) for .NET development. This integration makes it easy to manage packages directly from the Visual Studio IDE.

Command-Line Interface (CLI)

In addition to the Visual Studio integration, NuGet provides a command-line interface (CLI) for those who prefer working in a console environment. This flexibility allows developers to incorporate package management into their build scripts and automation processes.

Community Contributions

NuGet has a large and active community of developers who contribute packages to the NuGet Gallery. This means you can easily access a wide range of libraries and tools created by others, saving you from reinventing the wheel.

NuGet Gallery

The NuGet Gallery serves as a central repository for NuGet packages, making it a convenient hub for sharing and discovering packages. You can find packages for various purposes and from different authors in one place.

Package Restoration

NuGet simplifies the process of restoring packages in a project. When you open a project on a new machine or a different environment, NuGet can automatically download and install the required packages, reducing setup time.

NuGet.org

NuGet.org, the official NuGet package source, is a reliable and well-maintained repository. It ensures that the packages you download are trustworthy and have passed certain quality standards.

How to install NuGet?

There are a number of ways to use NuGet. For legacy projects (.NET Framework projects), you can download the NuGet CLI directly. For more info, check out how to Install NuGet Client tools.

The dotnet CLI used to have its own repository, but nowadays it’s part of the .NET SDK, which you can download here as well.

If you already have Visual Studio or JetBrains Rider installed on your system, you should have the required CLIs installed already. This article will use the dotnet CLI and .NET 7 in the demo project.

To test your CLI installation, run the following command:

dotnet --version

You should see the dotnet version printed on your terminal window.

Demo Project Setup

To follow along and practice using NuGet, it’s advised you follow the steps below to create your own playground. Alternatively, you can clone the accompanying GitHub repository to get the final code:

git clone https://github.com/volkanpaksoy/public-source-code --branch blog/nuget-basics

Run the following snippet to create the project:

mkdir NuGetBasics
cd NuGetBasics
dotnet new console

NuGet Package Metadata

The metadata in a NuGet package is stored in an XML file with .nuspec extension. The mandatory elements are:

<id></id>
<version></version>
<description></description>
<authors></authors>

You can also add projectUrl, license, icon, etc. You can find the full list of elements here.
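A minimal .nuspec containing only the mandatory elements can be generated like this (a sketch; the values are illustrative):

```python
import xml.etree.ElementTree as ET

# Sketch: generate a minimal .nuspec containing only the mandatory elements.
def build_nuspec(package_id, version, description, authors):
    package = ET.Element("package")
    metadata = ET.SubElement(package, "metadata")
    for tag, value in [("id", package_id), ("version", version),
                       ("description", description), ("authors", authors)]:
        ET.SubElement(metadata, tag).text = value
    return ET.tostring(package, encoding="unicode")

nuspec = build_nuspec("NuGetBasics", "1.0.0", "Demo package", "Your Name")
print(nuspec)
```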

To see the .nuspec file in action, run the following command in your terminal while in the same directory as the .csproj file:

dotnet pack

You should see an output like this:

The output of dotnet pack command

Then run the following command to check the contents of the bin/Debug folder:

ls -la ./bin/Debug

Terminal window showing the contents of bin/Debug directory

As stated before, .nupkg is simply a zip file. You can change the extension and unpack by running the following commands:

mv ./bin/Debug/NuGetBasics.1.0.0.nupkg NuGetBasics.zip
unzip NuGetBasics.zip -d package-contents
ls -la ./package-contents

You should see an output like this:

Terminal window showing the commands to rename the .nupkg file to a zip file, extract it and list the contents

You can see the NuGetBasics.nuspec file listed in the directory.

Open the file in a text editor and it should look like this:

Contents of .nuspec file showing the required elements

When you run the dotnet pack command, it automatically populates the required XML elements with default values.

Instead of packaging your output with an extra command like this, you can configure the project to create a package with every build. To achieve this, edit your .csproj file and add the following line in the PropertyGroup element:

<GeneratePackageOnBuild>true</GeneratePackageOnBuild>

How to consume NuGet packages

In the demo project, you want to get the path of a JSON file, read it, parse it and display it in the console window. To achieve this, update Program.cs with the following code:

var filepath = args[0];

using (StreamReader file = File.OpenText(filepath))
using (JsonTextReader reader = new JsonTextReader(file))
{
    JObject obj = (JObject)JToken.ReadFrom(reader);
    Console.WriteLine(obj);
}

This is just sample code taken from the Newtonsoft.Json package’s website. It’s probably one of the most popular NuGet packages, used in lots of different projects. Instead of reinventing the wheel, you would naturally want to leverage this library and quickly implement your own solution. To be able to use it, first you need to add it to your project by running the following command:

dotnet add package Newtonsoft.Json

Next, add the references to the library on top of the Program.cs file as shown below:

using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

Now, you can run your application by providing a path to a JSON file (you can find a sample in the GitHub repo at this path)

The output should look like this:

Terminal window showing the output of a sample JSON file

By running a single command, you now have all this JSON parsing functionality available in your application for free.

If you open the .csproj file, you should see the reference to the package:

The version number in your project may differ based on when you are downloading the package.

To update the package, all you have to do is run the same command (dotnet add package). If the package already exists and there is a newer version, it automatically downloads the latest and updates the reference in the project file.

If you want to remove a reference to a package, you can run the following command:

dotnet remove package Newtonsoft.Json

Conclusion

In this article, you looked into the main benefits of package management and the basics of NuGet packages. You also covered how to consume packages using a demo application. There is a lot more involved with packages, such as creating and publishing your own packages and hosting your own private repositories, which will be covered in future articles.
