Prosecution Insights
Last updated: April 19, 2026
Application No. 18/826,485

METHOD AND SYSTEM FOR ADJUSTING A STITCHING SEAM FOR SURROUND VIEW STITCHING

Status: Non-Final OA (§103, §112)
Filed: Sep 06, 2024
Examiner: WANG, YUEHAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: VIA TECHNOLOGIES, INC.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (404 granted / 485 resolved; +21.3% vs TC avg, above average)
Interview Lift: +12.9 percentage points among resolved cases with an interview (a moderate lift)
Typical Timeline: 2y 7m average prosecution; 47 applications currently pending
Career History: 532 total applications across all art units
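As a quick consistency check, the headline figures above can be reproduced from the raw counts shown on this page. The short Python sketch below shows the arithmetic; the variable names are illustrative stand-ins, not fields from any real data source.

```python
# Reproducing the examiner dashboard figures from the raw counts above.
# All names are illustrative; none come from a real API.

granted = 404          # career grants
resolved = 485         # resolved cases (grants + abandonments)
pending = 47           # applications currently pending
interview_lift = 12.9  # percentage-point lift in resolved cases with an interview

allow_rate = granted / resolved * 100         # 83.3% -> displayed as 83%
total_apps = resolved + pending               # 485 + 47 = 532 total applications
with_interview = allow_rate + interview_lift  # 83.3 + 12.9 = 96.2% -> displayed as 96%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"Total applications: {total_apps}")
print(f"Grant probability with interview: {with_interview:.1f}%")
```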

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 485 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 18 and 19 are objected to because of the following informalities: Claim 18 recites the limitation "so that the weight of the first fisheye image"; it should read "so that a weight of the first fisheye image". Claim 19 recites the limitation "so that the weight of the second fisheye image"; it should read "so that a weight of the second fisheye image". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 26 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.

Regarding Claim 26, the claim recites "wherein the step of adjusting the stitching seam comprises the step of adjusting the adjustment direction according to a distance between the target object and the first fisheye-lens camera and a distance between the target object and the second fisheye-lens camera in the stitching image". The recited claim language is almost identical to the limitations "calculating an adjustment direction of the stitching seam according to a distance between the target object and the first fisheye-lens camera and a distance between the target object and the second fisheye-lens camera in the stitching image; and adjusting the stitching seam according to the adjusting direction" recited in claim 24. Claim 26, by virtue of its dependency, fails to further limit the scope of claim 24, warranting a 112(d) rejection as being of improper dependent form. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 27 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 27 recites the limitation "increasing the slope of the stitching seam by a predetermined increment if the second distance is less than the first distance so that the weight of the second fisheye image in the stitching image is increased; and decreasing the slope of the stitching seam by the predetermined increment if the second distance is larger than the first distance so that the weight of the first fisheye image in the stitching image is increased". Claim 27 depends on claim 26, which depends on claim 24. Two distances are claimed in claims 24 and 26: a) the distance between the target object and the first fisheye-lens camera, and b) the distance between the target object and the second fisheye-lens camera. It is not clear whether the first distance in claim 27 refers to the distance between the target object and the first fisheye-lens camera or the distance between the target object and the second fisheye-lens camera; it is likewise unclear to which of those two distances the second distance refers. The prior art analysis is not conducted at this time because the deficiencies under 35 U.S.C. 112 must first be addressed for claims 26-27.
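To make the flagged logic concrete, here is a minimal, runnable Python sketch of the slope-adjustment step that claim 27 appears to recite, under the assumption (which the claims do not actually fix, and which is exactly the indefiniteness identified above) that the first distance is the target-to-first-camera distance and the second distance is the target-to-second-camera distance. All names are illustrative; nothing here is code from the application.

```python
# Sketch of the slope-adjustment step recited in claim 27.
# ASSUMPTION (the ambiguity flagged in the rejection): first_distance is
# target-object-to-first-camera, second_distance is target-object-to-
# second-camera. Names are illustrative stand-ins only.

def adjust_seam_slope(slope: float, first_distance: float,
                      second_distance: float, increment: float) -> float:
    """Return the seam slope after one adjustment step."""
    if second_distance < first_distance:
        # Target is closer to the second camera: steepen the seam so the
        # second fisheye image's weight in the stitched image increases.
        return slope + increment
    if second_distance > first_distance:
        # Target is closer to the first camera: flatten the seam so the
        # first fisheye image's weight in the stitched image increases.
        return slope - increment
    return slope  # equidistant: leave the seam as-is

# Example: target 1.2 m from camera 1 and 0.8 m from camera 2.
print(adjust_seam_slope(slope=1.0, first_distance=1.2,
                        second_distance=0.8, increment=0.1))  # 1.1
```

Swapping the assumed mapping of "first distance" and "second distance" flips the two branches, which is why the rejection treats the antecedent basis as indefinite.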
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 12-17, 23-25 and 28-30 are rejected under 35 U.S.C. 103 as being unpatentable over REN et al. (US 12346995 B2), referred to herein as REN, in view of Bichu et al. (US 20210082086 A1), referred to herein as Bichu.

Regarding Claim 1, REN in view of Bichu teaches a method for adjusting a stitching seam for surround view stitching, comprising (REN [0010]: relates to surround view or environment visualization, dynamic seam placement based on object saliency, dynamic seam placement based on ego-object state, and an adaptive 3D bowl that models the surrounding environment; [0016]: the present systems and methods for surround view or environment visualization; [0162]: the methods may also be embodied as computer-usable instructions stored on computer storage media):

receiving a first fisheye image from a first fisheye-lens camera and a second fisheye image from a second fisheye-lens camera (REN [0179]: in FIG. 13, the input images 1310 are four fisheye images captured by fisheye cameras located at the front, left, rear and right sides of a vehicle body);

detecting a target object in the first fisheye image and the second fisheye image (REN [0065]: at a high level, objects (e.g., salient objects) may be detected from sensor data (e.g., fisheye images) representing an environment surrounding an ego-object such as a vehicle; [0067]: to identify detected objects and/or salient regions, object detection may be performed on multiple images of an environment to create multiple object and/or saliency masks);

projecting the first fisheye image, the second fisheye image, and the target object to a stitching image (REN [0011]: the images may be aligned to create an aligned composite image or surface (e.g., a panorama, a 360° image, a bowl-shaped surface) with overlapping regions of image data, and a representation of the detected objects and/or salient regions (e.g., a saliency mask) may be generated and projected onto the aligned composite image or surface. Seams may be positioned in the overlapping regions to avoid or minimize crossing salient pixels represented in the projected masks, and the image data may be blended at the seams to create a stitched image or surface (e.g., a stitched panorama, stitched 360° image, stitched textured surface));

wherein the stitching image comprises the stitching seam and the target object appears on a seam region of the stitching image (REN [0166]: the method 1000, at block B1008, includes updating the candidate position for the seam to an updated position based at least on an intersection of the seam at the candidate position with pixels of the detected objects in one or more of the two or more projected masks);

calculating an adjustment direction of the stitching seam according to a distance between the target object and the first fisheye-lens camera and a distance between the target object and the second fisheye-lens camera in the stitching image (REN [0165]: the method 1000, at block B1006, includes determining a candidate position for a seam in an overlapping region of the two or more aligned image frames; [0206]: the method 2100, at block B2102, includes determining a distance from an ego-object to a detected object in an environment. For example, with respect to FIG. 16, the 3D object detector 1620 may perform 3D object detection on sensor data captured by the ego-object (e.g., the input images 1610, corresponding RADAR or LiDAR data), and the radial distance mapper 1630 may compute distance(s) to the detected objects (e.g., the closest detected object in a direction corresponding to each angular increment); [0191]: based on the distances and directions between the ego-object 1640 and the detected objects 1645, the 3D bowl parameter controller 1650 may adapt the shape of a 3D bowl modeling the surrounding environment); and

adjusting the stitching seam according to the adjustment direction (REN [0166]: the method 1000, at block B1008, includes updating the candidate position for the seam to an updated position based at least on an intersection of the seam at the candidate position with pixels of the detected objects in one or more of the two or more projected masks).

REN teaches distances between the detected object and the ego-object, but does not explicitly teach distances between the target object and the first and second fisheye cameras. However, Bichu teaches a distance between an object and fisheye cameras (Bichu [0189]: in order to correctly align every point in the two images, including points that capture objects at different distances from the camera/lens; [0169]: projecting an image captured with a left fisheye camera/lens, and the right sphere at 1530 of FIG. 15 obtained from projecting an image captured with a right fisheye camera/lens).

Bichu discloses an image stitching technique including depth or disparity estimation, alignment, and blending processes, and is analogous art to the present patent application. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify REN to incorporate the teachings of Bichu, applying the distances between captured objects and the fisheye cameras to the systems and methods for image stitching with dynamic seam placement based on ego-vehicle state for surround view visualization. Doing so would provide improved and optimized image calibration and rendering techniques that may both improve output image quality for viewing and reduce image processing time, which may facilitate the real-time output of tiled composite panoramic images.
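Read as a pipeline, the claim 1 method that the rejection maps onto REN and Bichu is a short sequence of steps. The runnable sketch below illustrates that flow; every class, helper, and value is a hypothetical stand-in, not code from REN, Bichu, or the application itself.

```python
# Illustrative, runnable sketch of the claim 1 pipeline as the rejection
# maps it onto REN (detection, seam placement) and Bichu (distances from
# objects to the fisheye cameras). All names are hypothetical stand-ins.
import math
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    position: tuple  # (x, y) mounting position, standing in for extrinsics

def detect_target(img1, img2):
    # Stand-in for detecting a target object in both fisheye images.
    return {"label": "pedestrian", "pos": (-0.5, 1.5)}

def project_to_stitching_image(img1, img2, target):
    # Stand-in for projecting both images and the target onto the
    # stitching image (REN's aligned composite surface).
    return {"seam_slope": 1.0, "target": target}

def on_seam_region(stitched):
    # Stand-in: assume the projected target overlaps the seam region.
    return stitched["target"] is not None

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

cam1 = Camera("front", (0.0, 2.0))
cam2 = Camera("left", (-1.0, 0.0))
img1, img2 = "fisheye-1", "fisheye-2"          # images received from the cameras

target = detect_target(img1, img2)
stitched = project_to_stitching_image(img1, img2, target)
if on_seam_region(stitched):
    d1 = dist(target["pos"], cam1.position)    # target to first fisheye camera
    d2 = dist(target["pos"], cam2.position)    # target to second fisheye camera
    direction = 1 if d2 < d1 else -1           # adjustment direction of the seam
    stitched["seam_slope"] += 0.1 * direction  # adjust the stitching seam
print(stitched["seam_slope"])                  # 0.9: weight shifts to the first image
```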
Regarding Claim 2, REN in view of Bichu teaches the method for adjusting the stitching seam as claimed in claim 1, and further teaches the method further comprising: determining whether the stitching seam has reached a limit value (REN [0071]: a state machine implementing a decision tree is used to determine whether to use a default seam placement or dynamic seam placement that avoids salient objects or regions, and to enable and disable dynamic seam placement based on speed of ego-motion, direction of ego-motion, proximity to salient objects, active viewport direction, driver gaze, and/or other factors); wherein the limit value is associated with extrinsic parameters of the first fisheye-lens camera and the second fisheye-lens camera (REN [0118]: each camera may have various intrinsic or extrinsic values that can impact the appearance of a captured image, where those values may relate to field of view, optical center, focal length, or camera pose, among other such options; [0133]: a seam may be placed horizontally to present a better (less disrupted) forward-facing view to the driver. However, when driving at low speeds and/or whenever an object (e.g., another vehicle) passes by closely (e.g., within a threshold distance), the driver may need to pay closer attention to the object passing by). The threshold distance is calculated from extrinsic parameters of a camera.

Regarding Claim 3, REN in view of Bichu teaches the method for adjusting the stitching seam as claimed in claim 2, and further teaches the method further comprising: determining that the stitching seam has reached the limit value and not adjusting the stitching seam (REN [0142]: if the distance to the closest object is more than the threshold proximity, the dynamic seam toggling state machine 601 disables dynamic seam placement at block 625 and uses a default seam placement instead (e.g., horizontal or other default values, such as one that minimizes seam length visible in a particular viewport)); and continuously receiving a third fisheye image from the first fisheye-lens camera and a fourth fisheye image from the second fisheye-lens camera (REN [0142]: returning to block 610, when ego-speed is within some medium speed range (e.g., from 5-16 km/hr), the dynamic seam toggling state machine 601 disables dynamic seam placement in favor of a default seam (e.g., a horizontal seam); [0143]: returning to block 615, distances to surrounding objects may be obtained or determined in various ways. In some embodiments, 3D object detection is performed (e.g., by processing sensor data) and/or a representation of detected 3D objects (e.g., 3D cuboids in rig coordinates) is accessed. For example, distance to objects may be computed using depth or stereo camera arrays).

Regarding Claim 4, REN in view of Bichu teaches the method for adjusting the stitching seam as claimed in claim 2, and further teaches the method further comprising: determining that the stitching seam has not reached the limit value (REN [0142]: if the distance to the closest object is less than some threshold proximity (e.g., less than 3 m), the dynamic seam toggling state machine 601 enables dynamic seam placement at block 620); and adjusting the stitching seam according to the adjustment direction (REN [0167]: for detected objects that are less than some length in pixels, such as 500 pixels, a vertical or horizontal seam is selected when the entire mask projection being evaluated (the overlapping region) is occupied by a detected object (to avoid the object), and/or a seam is selected that crosses the least number of (remaining) object pixels).

Regarding Claim 5, REN in view of Bichu teaches the method for adjusting the stitching seam as claimed in claim 1, and further teaches the method further comprising: retrieving a representative point of the target object according to a type of the target object (REN [0074]: adapt the shape, orientation, and/or dimensions of a 3D bowl (e.g., a mesh) modeling the surrounding environment based on the distance to nearby detected objects and project image data onto the adapted 3D bowl. The present techniques may be utilized to visualize an environment around an ego-object, such as a vehicle, robot, and/or other type of object); projecting the representative point to the stitching image (REN [0075]: sensor data such as a LiDAR point cloud is projected onto a top-down 2D occupancy grid that represents locations of detected objects, or a 3D occupancy grid that represents locations and projected or assumed heights of detected objects (e.g., assuming vehicles have a height of 2 or 3 meters above the ground surface)); and calculating the adjustment direction of the stitching seam according to a distance between the representative point in the stitching image and the first fisheye-lens camera and a distance between the representative point in the stitching image and the second fisheye-lens camera (REN [0075]: the 3D object detections and/or the 2D/3D occupancy grid may be used to compute distances to detected objects, and the distances may be populated in a radial distance map that represents distance to (e.g., a representative point(s) on) the closest detected object as a function of angle (e.g., representing a rotation around an axis of the vehicle coordinate system, such as yaw)).

Regarding Claim 6, REN in view of Bichu teaches the method for adjusting the stitching seam as claimed in claim 5, and further teaches wherein the step of calculating the adjustment direction of the stitching seam according to the distance between the representative point in the stitching image and the first fisheye-lens camera and the distance between the representative point in the stitching image and the second fisheye-lens camera comprises: calculating a first distance between the representative point and the first fisheye-lens camera; calculating a second distance between the representative point and the second fisheye-lens camera (Bichu [0189]: in order to correctly align every point in the two images, including points that capture objects at different distances from the camera/lens; [0169]: projecting an image captured with a left fisheye camera/lens, and the right sphere at 1530 of FIG. 15 obtained from projecting an image captured with a right fisheye camera/lens); and adjusting a slope of the stitching seam according to a relationship between the first distance and the second distance (Bichu [0188]: misalignment in the lenses and cameras is determined by performing a 3D calibration on the cameras/lenses such that what remains is misalignment due to parallax. In particular, image points that correspond to scene points at different distances from the cameras/lenses must be adjusted by different shifts (e.g., shifting individual pixels in a first image with respect to a second image to align points at different distances by different amounts) in order to correctly align two images that are misaligned due to parallax. Accordingly, in order to correctly align points at different distances from a camera/lens, the depth of each point is detected using a depth estimation process).

Regarding Claims 12-17, REN in view of Bichu teaches a surround view stitching seam adjustment system, comprising (REN [0010]: relates to surround view or environment visualization, dynamic seam placement based on object saliency, dynamic seam placement based on ego-object state, and an adaptive 3D bowl that models the surrounding environment; [0016]: the present systems and methods for surround view or environment visualization; [0162]: the methods may also be embodied as computer-usable instructions stored on computer storage media).
The metes and bounds of these claims substantially correspond to the limitations set forth in claims 1-6; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claim 23, REN in view of Bichu teaches the surround view stitching seam adjustment system as claimed in claim 12, and further teaches wherein the first and second fisheye images are obtained from a simulation platform (REN [0235]: FIG. 28 shows an example under-vehicle reconstruction 2830 using simulated fisheye images 2810a-d).

Regarding Claims 24, 28 and 29, REN in view of Bichu teaches a method for adjusting a stitching seam for surround view stitching, comprising (REN [0010]: relates to surround view or environment visualization, dynamic seam placement based on object saliency, dynamic seam placement based on ego-object state, and an adaptive 3D bowl that models the surrounding environment; [0016]: the present systems and methods for surround view or environment visualization; [0162]: the methods may also be embodied as computer-usable instructions stored on computer storage media). The metes and bounds of these claims substantially correspond to the limitations set forth in claims 1-3; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claim 25, REN in view of Bichu teaches the method of claim 24, and further teaches wherein the target object appears in a first-level warning frame or a second-level warning frame of the stitching image (REN [0288]: cameras with a field of view that includes portions of the environment to the rear of the vehicle 3500 (e.g., rear-view cameras) may be used for park assistance, surround view, rear collision warnings, and creating and updating the occupancy grid).

Regarding Claim 30, REN in view of Bichu teaches the method of claim 28, and further teaches wherein the limit value is associated with an imaging range constrained by a physically installed position, rotation angle, and the field of view (FOV) of the first fisheye-lens camera or of the second fisheye-lens camera (REN [0190]: in FIG. 16, the visualization 1635 illustrates one or more values that may be stored in an example radial distance map. The visualization 1635 depicts an ego-object 1640 (corresponding to the ego-object 1627), bounding boxes of detected objects 1645 (corresponding to a set of the detected objects 1625 within a threshold range of the ego-object 1627), and distances between the ego-object 1640 and the corners of the bounding boxes of the detected objects 1645).

Allowable Subject Matter

Claims 7-11 and 18-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: for claims 7-11 and 18-22, REN in view of Bichu teaches the method/system of claim 1/12 but does not teach the limitations recited therein. Therefore, claims 7-11 and 18-22, in the context of claim 1 or 12 as a whole, would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20160088287 A1 and US 20180007263 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang, whose telephone number is (571) 270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Sep 06, 2024: Application Filed
Mar 10, 2026: Non-Final Rejection, §103, §112 (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12597178: VECTOR OBJECT PATH SEGMENT EDITING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597506: ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586286: DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586261: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567182: USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 96% (+12.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
