Honestly, without knowing the intended/anticipated viewing distance, it is difficult to answer this question.
There are two aspects to increasing print size when the original pixels per inch of a digital file is low. One is optimizing the input (the digital source file) and the other is optimizing the output data for the printer and media.
One aspect of input optimization minimizes artifacts caused by using discrete data to model analog information. Three types of errors affect this process:

- Intensity quantization - not enough intensity resolution
- Spatial aliasing - not enough spatial resolution
- Temporal aliasing - not enough temporal resolution; this applies to display monitors only
Post-production software addresses these issues.
Spatial aliasing artifacts are minimized by capture sharpening. Creative sharpening involves selective, local adjustments to optimize perceived image aesthetics. Output sharpening optimizes the file for a specific printer/paper combination.
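If you handle the output step yourself rather than in the printer driver, a minimal sketch of output sharpening using Pillow in Python might look like the following. The file names and the radius/percent/threshold values are placeholders, not recommendations for any particular printer/paper combination.

```python
# Minimal output-sharpening sketch using Pillow (pip install Pillow).
# The unsharp-mask settings below are placeholders; the right amounts depend
# on the printer, the paper, and the final print size.
from PIL import Image, ImageFilter

img = Image.open("scan.tif")

# Unsharp mask applied as the last step before sending the file to the printer.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=1.5, percent=120, threshold=2))

sharpened.save("scan_output_sharpened.tif")
```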
Intensity resolution is addressed by making estimates about information we did not collect. Since we can't create information (image detail) out of nothing, all software uses models to guess at the information we wish we had. Algorithms add estimated intensity values for new samples (pixels) in between the samples that contain data.
In all cases the errors are distributed among the pixels. More sophisticated software offers more flexibility in error distribution strategies.
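As a rough illustration (assuming a Python/Pillow workflow rather than a dedicated upscaling tool), the "model" is essentially the resampling filter you choose: nearest-neighbour just repeats existing pixels, while bicubic and Lanczos estimate new intensity values from neighbouring samples and distribute the error differently. File names and the 2x factor below are examples only, and a recent Pillow (9.1+) is assumed for the Resampling enum.

```python
# Sketch: adding in-between samples with different estimation models (Pillow >= 9.1).
from PIL import Image

img = Image.open("scan.tif")
new_size = (img.width * 2, img.height * 2)

# Each resampling filter is a different model for guessing the missing intensities.
for name, method in [
    ("nearest", Image.Resampling.NEAREST),   # repeats existing pixels, no new estimates
    ("bicubic", Image.Resampling.BICUBIC),   # smooth estimates from a 4x4 neighbourhood
    ("lanczos", Image.Resampling.LANCZOS),   # sharper estimates, may ring on hard edges
]:
    img.resize(new_size, resample=method).save(f"scan_2x_{name}.tif")
```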
The optimum model for estimating the missing information will be different for different situations.
For an empty region of blue sky, or even a sky with clouds, the modeled information is very similar to the actual (but missing) information. However, if there is a squadron of aircraft flying in formation, modeling the missing but desired information in that region becomes tricky.
In your case it sounds like retaining the perceived aesthetics of film grain rendering is a priority. In other words, the information content of the original data is low. This should make the job easier.
Most of the methods and tutorials assume the original, sparse data is digital. But your original information is analog.
Assuming the image scan was optimized for a grainy negative, the issue isn't estimating missing intensity detail samples out of nothing. The detail level in a grainy negative is low (in terms of intensity quantization). The issue is modeling the perceived aesthetics of the image grain for the missing samples.
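One crude way to sketch that idea in code (assuming a Python/Pillow/NumPy workflow, not any particular commercial upscaler) is to enlarge with a smooth interpolator and then reintroduce a grain-like noise field at the new pixel pitch, so the added samples carry a texture similar to the original grain rather than plasticky smoothness. The grain strength here is a made-up placeholder; dedicated grain-simulation tools model this far more carefully.

```python
# Sketch: enlarge, then re-model grain for the newly added samples (Pillow + NumPy).
import numpy as np
from PIL import Image

img = Image.open("scan.tif").convert("L")          # grayscale negative scan (assumed)
big = img.resize((img.width * 2, img.height * 2),
                 resample=Image.Resampling.BICUBIC)

arr = np.asarray(big, dtype=np.float32)
rng = np.random.default_rng(0)
grain = rng.normal(loc=0.0, scale=4.0, size=arr.shape)  # placeholder grain strength

out = np.clip(arr + grain, 0, 255).astype(np.uint8)
Image.fromarray(out).save("scan_2x_grain.tif")
```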
Finally, printers use proprietary software drivers to increase output samples (pixels) to achieve the desired DPI. The optimum parameters depend on the printer hardware, the ink (if ink is used), and the output media. Often it is best to let the printer decide how to add pixels once the input file artifacts are minimized; I have used test strips to save time and money. You may find that a modest increase in input file PPI works well with the printer driver's automated increase in DPI.
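If you want to experiment with how much of the enlargement to do yourself before handing off to the driver, a back-of-the-envelope sketch is below. The print size and PPI are examples, not recommendations, and the scan is assumed to already match the print's aspect ratio.

```python
# Sketch: how many pixels a given print size needs, and a modest pre-resize
# before letting the printer driver interpolate the rest. Values are examples.
from PIL import Image

print_width_in, print_height_in = 16, 20       # intended print size (inches)
target_ppi = 240                                # example input resolution for the driver

need_w = print_width_in * target_ppi
need_h = print_height_in * target_ppi

img = Image.open("scan.tif")
print(f"native: {img.width}x{img.height}px -> "
      f"{img.width / print_width_in:.0f} ppi at this print size")

# Upscale only if the file falls short; assumes the scan matches the print aspect ratio.
if img.width < need_w or img.height < need_h:
    img = img.resize((need_w, need_h), resample=Image.Resampling.LANCZOS)

# Embed the PPI so the driver knows the intended size; it handles the final DPI.
img.save("scan_for_print.tif", dpi=(target_ppi, target_ppi))
```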