First off, all 35mm FF sensors are stitched. No way around it. The reticle field of view of the steppers used in fabs is not large enough to image a FF sensor in one shot. APS sensors can be made in one shot, but they push the limits of the reticle aperture; in fact most APS sensor designs are reticle limited. Cost is a MAJOR reason why the digital camera industry way back in the day focused on APS sensors, since they could build them in one reticle field.
Most fabs have reticle field limits of ~30x30mm. I think TSMC's stepper fields are ~26 x ~28mm. To step a 35mm sensor you need a minimum of 2 shots: the 24mm active-area height fits within the 28-30mm field height, so you do 2 shots, left/right halves, and there is a stitch line down the middle. Medium format sensors are a minimum of 4 shots.
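To make the geometry concrete, here is a toy sketch of the shot-count arithmetic. The ~26x30mm usable field is an illustrative number I picked to match the ballpark figures above, not any specific fab's spec, and `shots_needed` is a hypothetical helper, not a real fab tool:

```python
import math

def shots_needed(sensor_w_mm, sensor_h_mm, field_w_mm=26.0, field_h_mm=30.0):
    """Minimum stepper shots to tile a sensor's active area (toy model)."""
    # Orient the sensor so its short side lines up with the field's short side.
    s_short, s_long = sorted((sensor_w_mm, sensor_h_mm))
    f_short, f_long = sorted((field_w_mm, field_h_mm))
    return math.ceil(s_short / f_short) * math.ceil(s_long / f_long)

print(shots_needed(36, 24))      # 35mm FF -> 2 (left/right halves, one stitch)
print(shots_needed(23.5, 15.6))  # APS-C   -> 1 (fits one reticle field)
print(shots_needed(53.4, 40.0))  # medium format -> 4 (2x2 grid of shots)
```

The 24mm side fits the field height, so only the 36mm side gets split, which is why the stitch line runs down the middle.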
Fabs HATE stitching since it screws up the ‘flow’ of steps/wafer/machine etc. The stitched wafers are in the stepper longer than the other wafers in the line, since they have to take multiple shots while the wafers behind them only need one shot each. Most fabs will stack up the stitched work and run it all at the same time, maybe once per month.
The stitch line has to be processed out or it will be obvious. It is impossible not to have some type of visible disturbance at the stitch boundary. If anyone ever tells you that they don’t have stitch boundary processing, they are BS’ing. The reason is, if you think about it, the stepper lens is not perfect, and at the boundary you are imaging the reticle with the left side of the lens on one shot and the right side on the other. That fact, convolved with the accuracy of positioning the reticle at the boundary, creates a very small anomaly that ends up disturbing how the pixel collects and processes the photon signal. These anomalies get more difficult to deal with as pixels shrink. 4um pixels (45mp cameras) are much harder to make than 6-9um pixels (12-24mp). IMHO you will never see a ~150Mp 35mm camera for this reason. Yields on this device would price the camera at $10-20K and nobody could afford it. Could it be built? Yes, but the stitching artifacts would be larger and more difficult to deal with, probably limiting the ISO range too (super high digital gains in the ISP would make stitch management artifacts very difficult to hide). Besides, there is a limit to ‘practical’ MP in DSLRs. At some point users won't pay, and the file sizes get stupid. They are better off investing in auto-focus, frame rate, or some other useful photographic aspects, not just pixel count. IMHO 35mm DSLRs will stop the MP wars at the 60-90mp tier. 60mp is a 3.8um pixel; 96mp is a 3um pixel (8k x 12k). ISO and dynamic range on a 3um pixel get tough too for Pro performance.
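For anyone who wants to check the pixel-pitch numbers above, they fall straight out of the 36x24mm active area. This is just the arithmetic, nothing sensor-specific; `pixel_pitch_um` is my own throwaway helper:

```python
def pixel_pitch_um(megapixels, width_mm=36.0, height_mm=24.0):
    """Square-pixel pitch in microns for a given MP count on a 36x24mm area."""
    area_mm2 = width_mm * height_mm
    pitch_mm = (area_mm2 / (megapixels * 1e6)) ** 0.5
    return pitch_mm * 1000.0

for mp in (24, 45, 60, 96, 150):
    print(f"{mp} MP -> {pixel_pitch_um(mp):.2f} um pixel")
```

This reproduces the ~3.8um at 60mp and 3um at 96mp (12k x 8k) quoted above, and shows a hypothetical 150mp FF sensor would be down around 2.4um.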
From a stitch management perspective, every camera has to be uniquely calibrated to remove the stitch boundary anomalies. Both the dark offset and the responsivity have to be processed to match the two halves at the boundary. Offsets differ because the column amplifiers also sit on both sides of the stitch, and this can affect how the transistors are made, etc. These are very small differences, 1-2DN at most, but they make huge problems at high ISO when the digital gains are high. (A “DN” (digital number) is the RAW digital number coming out of the A/D converter in every column; for a 14 bit ADC, the range of DNs is 0-16383.) Responsivity is a linear gain adjustment to match the halves. Typically they would uniformly light up the sensor to ~75% of full range, measure the DN values after dark subtract, then compute a correction gain. This gain may also be done at various ISO ranges to accommodate the sensor's non-linearity across the full range. These offset and gain adjustments can also be done differently at the top of the sensor compared to the bottom, due to gradients affecting offset/gain across that distance (second order effects from the stepper, for example stepper lens distortions between the left/right halves).
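The offset-then-gain matching described above can be sketched as follows. This is a deliberately simplified toy, assuming one offset and one gain per half; a real pipeline would work per-column, per-ISO, and with the top/bottom gradients mentioned above, and `calibrate_stitch` is my own made-up name, not anyone's actual firmware routine:

```python
import numpy as np

def calibrate_stitch(dark_frame, flat_frame, image, seam_col):
    """Match the right half's dark offset and responsivity to the left half."""
    # Dark offset per half, in DN (raw digital numbers out of the column ADCs).
    dark_l = dark_frame[:, :seam_col].mean()
    dark_r = dark_frame[:, seam_col:].mean()
    # Responsivity from a uniform flat field (~75% of full range), dark-subtracted.
    resp_l = flat_frame[:, :seam_col].mean() - dark_l
    resp_r = flat_frame[:, seam_col:].mean() - dark_r
    gain_r = resp_l / resp_r  # linear correction gain for the right half
    out = image.astype(np.float64).copy()
    # Remove the right half's offset, rescale, then re-apply the left offset.
    out[:, seam_col:] = (out[:, seam_col:] - dark_r) * gain_r + dark_l
    return out
```

The point of the two calibration frames is exactly what the text describes: the dark frame gives you the 1-2DN offset mismatch, and the flat field gives you the gain mismatch, so the seam disappears at any illumination level where the response is linear.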
When making a sensor, the wafer is stationary and the stepper moves to each sensor location and images that mask's information. The stepper images all the sensors, then that mask is changed out and the process repeats after the wafer returns from the other machine steps (for example, the wafers are imaged, then go to an ion implanter, then come back to the stepper for the next step, which could be metal dep). About 20-30 different masks are needed to make a sensor. In the case of stitching, it takes twice as long, since the sensors are imaged in 2 halves (for 35mm) with the same 20-30 mask stack.
The stitch line is always visible if you don’t process it out; could be very subtle, but it’s there. For very large pixels it might not be visible, but that’s probably sensors with >10-20um pixels.
With your sensor corrosion, that certainly would have helped hide the stitch by blurring the image. So, like I mentioned before, I only changed the defective glass. The stitch line will be more obvious at small apertures, at certain angles of light through the lens, and depending on your post-processing, such as sharpening.
If the stitch line is really a problem, tell the Leica repair center that it has become more obvious over time, and ask them to adjust the camera firmware to process it out.