I use Google Cloud Functions with Go. I recently upgraded from Go 1.11 to Go 1.13 (because Go 1.11 is being deprecated) and ran into some annoying, undocumented issues.
Static Files And The Current Working Directory
One of my Cloud Functions acts as a tiny web server; it serves a few static HTML files in addition to its dynamic endpoints.
In Go 1.11, Cloud Functions put the static files (and all the source files, for that matter) in the working directory of the function. This (1) makes sense, and (2) makes testing easy.
However, in Go 1.13, Cloud Functions places the static files (and all of the source files) in the ./serverless_function_source_code directory. Why? Who knows. All that mattered was that after a simple version upgrade, all of my stuff broke because it couldn't find files that it had been able to find before the upgrade.
I found that using a sync.Once to attempt to change the current working directory (if necessary) is a fairly clean backward-compatible way of handling this issue.
Here's an example; it's fairly verbose, but you could rip out most of the logging if you don't want or need it.
package cloudfunction // Use whatever package name your function already has.

import (
	"net/http"
	"os"
	"sync"

	"github.com/sirupsen/logrus"
)

// GoogleCloudFunctionSourceDirectory is where Google Cloud will put the source code that was uploaded.
//
const GoogleCloudFunctionSourceDirectory = "serverless_function_source_code"

// once is an object that will only execute its function one time.
//
// Because we want to log during our initialization, we need to handle this in a non-standard
// function and keep track of our initialization status.
var once sync.Once

// Initialize initializes the application.
//
// Primarily, this changes the current working directory.
func Initialize(log *logrus.Logger) {
	log.Infof("Initializing the application.")

	path, err := os.Getwd()
	if err != nil {
		log.Warnf("Could not find the current working directory: %v", err)
	}
	log.Infof("Current working directory: %s", path)

	log.Infof("Looking for top-level source directory: %s", GoogleCloudFunctionSourceDirectory)
	fileInfo, err := os.Stat(GoogleCloudFunctionSourceDirectory)
	if err == nil && fileInfo.IsDir() {
		log.Infof("Found top-level source directory: %s", GoogleCloudFunctionSourceDirectory)
		err = os.Chdir(GoogleCloudFunctionSourceDirectory)
		if err != nil {
			log.Warnf("Could not change to directory %q: %v", GoogleCloudFunctionSourceDirectory, err)
		}
	}

	log.Infof("Initialization complete.")
}

// CloudFunction is an HTTP Cloud Function with a request parameter.
func CloudFunction(w http.ResponseWriter, r *http.Request) {
	log := logrus.New()

	// Initialize our application if we haven't already.
	once.Do(func() { Initialize(log) })

	// YOUR CLOUD FUNCTION LOGIC HERE
}
For more information, see the Cloud Functions concepts docs.
Logging And Environment Variables
For whatever reason, Cloud Functions with Go don't log at anything other than the "default" log level; this means that all of my carefully crafted log messages just get dumped into the logs at the same severity.
I've been using gcfhook with logrus to get around this, but it's not an ideal solution. That combination works by discarding all of the application's normal log output and then adding a logrus hook that connects to the StackDriver API to send proper log entries over the network. It works fine, but it's silly to have to make a network connection to a logging API when the application could just write its logs directly.
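For context, here's roughly what that hook-based approach looks like. This is a generic sketch of the technique, not gcfhook's actual API: the cloudLoggingHook type, its fields, and the newStackdriverLogger helper are made up for illustration, and it assumes the cloud.google.com/go/logging client.

package cloudfunction // Same illustrative package as above.

import (
	"context"
	"io/ioutil"

	"cloud.google.com/go/logging"
	"github.com/sirupsen/logrus"
)

// cloudLoggingHook is a hypothetical logrus hook that forwards each log entry
// to the StackDriver (Cloud Logging) API over the network.
type cloudLoggingHook struct {
	logger *logging.Logger
}

// Levels reports which log levels this hook handles (all of them).
func (h *cloudLoggingHook) Levels() []logrus.Level {
	return logrus.AllLevels
}

// Fire sends a single logrus entry to Cloud Logging with a mapped severity.
// Entries are buffered by the client and sent asynchronously.
func (h *cloudLoggingHook) Fire(entry *logrus.Entry) error {
	severity := logging.Info
	switch entry.Level {
	case logrus.DebugLevel, logrus.TraceLevel:
		severity = logging.Debug
	case logrus.WarnLevel:
		severity = logging.Warning
	case logrus.ErrorLevel, logrus.FatalLevel, logrus.PanicLevel:
		severity = logging.Error
	}
	h.logger.Log(logging.Entry{Severity: severity, Payload: entry.Message})
	return nil
}

// newStackdriverLogger is a hypothetical helper that builds a logrus.Logger
// whose normal output is discarded and whose entries go to Cloud Logging instead.
func newStackdriverLogger(ctx context.Context, projectID, logName string) (*logrus.Logger, error) {
	client, err := logging.NewClient(ctx, projectID)
	if err != nil {
		return nil, err
	}
	log := logrus.New()
	log.SetOutput(ioutil.Discard) // Nullify the normal output; the hook does the real work.
	log.AddHook(&cloudLoggingHook{logger: client.Logger(logName)})
	return log, nil
}

Note that a hook like this needs to know the project (and, to label the entries properly, the function name and region), which is where the environment variables in the next paragraph come in.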
As of Go 1.13, Cloud Functions no longer sets the FUNCTION_NAME, FUNCTION_REGION, and GCP_PROJECT environment variables. This is a problem because we need those three pieces of information in order to use the StackDriver API to send the log messages. You could set those environment variables yourself as part of your deployment, but I'd prefer not to.
Fortunately, Cloud Functions can now parse (poorly documented) JSON-formatted lines from stdout and stderr, turning them into proper log messages with severities. The Cloud Functions docs refer to this as "structured logging", but those docs don't quite match the actual behavior. Cloud Run has a document on how these JSON-formatted lines should look, but it's still a bit hazy.
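To make that concrete, here's a minimal sketch of what such a line can look like. It only uses the "severity" and "message" fields described in the Cloud Run structured-logging document; the logEntry type and logJSON helper are just for illustration.

package cloudfunction // Same illustrative package as above.

import (
	"encoding/json"
	"fmt"
)

// logEntry is a stripped-down structured log entry using the special fields
// ("severity" and "message") that Cloud Logging recognizes.
type logEntry struct {
	Severity string `json:"severity"`
	Message  string `json:"message"`
}

// logJSON writes a single JSON-formatted line to stdout so that the entry
// is recorded with the given severity instead of "default".
func logJSON(severity, message string) {
	line, err := json.Marshal(logEntry{Severity: severity, Message: message})
	if err != nil {
		fmt.Println(message) // Fall back to a plain (unstructured) line.
		return
	}
	fmt.Println(string(line))
}

Calling logJSON("WARNING", "something looks off") emits {"severity":"WARNING","message":"something looks off"}, which shows up in the logs as a warning rather than a "default" entry. A logrus formatter can emit exactly this shape for every log call, which is where gcfstructuredlogformatter comes in.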
Anyway, the gcfstructuredlogformatter package provides a logrus formatter that outputs JSON instead of plain text for logs. This eliminates the need for the extra environment variables and generally simplifies the logging workflow. It should only take a couple of lines of code to swap gcfhook out for gcfstructuredlogformatter.
Here's an example:
// CloudFunction is an HTTP Cloud Function with a request parameter.
func CloudFunction(w http.ResponseWriter, r *http.Request) {
	log := logrus.New()

	if value := os.Getenv("FUNCTION_TARGET"); value == "" {
		log.Infof("FUNCTION_TARGET is not set; falling back to normal logging.")
	} else {
		formatter := gcfstructuredlogformatter.New()
		log.SetFormatter(formatter)
	}

	log.Infof("This is an info message.")
	log.Warnf("This is a warning message.")
	log.Errorf("This is an error message.")

	// YOUR CLOUD FUNCTION LOGIC HERE
}
Hopefully this saves you from banging your head against the wall for a few hours like I did while frantically trying to figure out why the upgrade had failed in such weird ways.