I've been struggling with this on an internal package at work. Here's what I've settled on:
storing user credentials in environment variables so that scripts can pull the username and password without ever hard-coding them in the scripts themselves.
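For example, the credentials can live in the user's ~/.Renviron file, which R reads at startup (the variable names and values here are placeholders):

```
# ~/.Renviron -- keep this out of version control
YOUR_UID=jane_doe
YOUR_PWD=s3cret
```

After editing .Renviron, restart R (or call readRenviron("~/.Renviron")) so the values are picked up.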
I then have functions that set up the connection, which the user assigns to an object, e.g.:
pwd <- Sys.getenv('YOUR_PWD')
uid <- Sys.getenv('YOUR_UID')
con <- connect_our_db(uid, pwd)
connect_our_db has logic to figure out whether it's running on Windows, macOS, or Linux and chooses the driver appropriately. It reads the username and password from environment variables and barfs with a meaningful error message if they are not set.
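A minimal sketch of what such a function might look like, assuming DBI/odbc; the driver strings and server name are placeholders, not our actual setup:

```r
connect_our_db <- function(uid = Sys.getenv("YOUR_UID"),
                           pwd = Sys.getenv("YOUR_PWD")) {
  # Fail early with a useful message instead of a cryptic driver error
  if (uid == "" || pwd == "") {
    stop("Credentials not found. Set YOUR_UID and YOUR_PWD in ~/.Renviron.")
  }
  # Pick a driver per OS (driver names are hypothetical examples)
  driver <- switch(Sys.info()[["sysname"]],
    Windows = "SQL Server",
    Darwin  = "ODBC Driver 17 for SQL Server",
    Linux   = "ODBC Driver 17 for SQL Server"
  )
  DBI::dbConnect(odbc::odbc(),
    driver = driver,
    server = "our-db.example.com",  # hypothetical host
    uid    = uid,
    pwd    = pwd
  )
}
```
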
Then each function that uses a connection depends on the connection being passed to it explicitly, e.g.:
out_table <- my_magic_function(con, param)
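my_magic_function here stands in for any query helper. A sketch of the pattern, assuming DBI and a parameterized query (the table and column are hypothetical):

```r
my_magic_function <- function(con, param) {
  # The function only touches what is passed to it -- no global lookups.
  # Parameterized queries also keep user input out of the SQL string.
  DBI::dbGetQuery(con,
    "SELECT * FROM some_table WHERE id = ?",
    params = list(param)
  )
}
```
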
When I initially started the project I was doing things like setting a con object in the global environment from within functions, then calling that global con object from inside other functions without passing it as a parameter. This approach blew up in my face when I wanted to start using RStudio's new "source as job" feature: it did not play well with my non-functional style. I secretly knew that reaching into the global environment from within functions was a bad idea; I had just never had it bite me. So I refactored my code so that functions ONLY ever interact with things explicitly passed to them. This made everything so much easier to maintain and debug.
So basically I'm saying it's a good idea to do what @c_12345 felt was less than ideal:
(1) for every function in the package have a con argument where the user passes the connection (this seems less than ideal for anyone using the package)
One problem you may run into otherwise is that your code ends up being used alongside other code and other connections. It's a really fragile arrangement to make your code implicitly dependent on an object name in the global environment.