DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/11/2026 has been entered.
AMENDMENT
Applicant submitted arguments and remarks on 02/11/2026. The Examiner acknowledges the arguments and has reviewed the claims accordingly.
Applicant amended claims 1 – 4, 7, 11 – 14, 16 and 18. Claims 1 – 20 are currently pending.
Response to Arguments
Regarding Argument 1, with respect to the rejection of claims 1, 2, 6, 7, 10 – 12, 15 – 18 and 20 under 35 U.S.C. 103, Applicant states that independent claims 1, 11 and 16 have been amended. Applicant further states that Banerjee and Das fail to teach the amended features of the independent claims. Therefore, Applicant requests withdrawal of the rejections of the claims under 35 U.S.C. 103. (See Arguments/Remarks, page 10 of 11, dated 02/11/2026)
In response to Argument 1, with respect to the rejection of claims 1, 2, 6, 7, 10 – 12, 15 – 18 and 20 under 35 U.S.C. 103, the Examiner states that Applicant's arguments have been fully considered but are rendered moot because the amendments made to the independent claims have changed the scope of the claims. The Examiner further states that the following new grounds of rejection have been necessitated by the amendments made to the claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 3, 5 – 7, 10 – 13, 15 – 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gorur Sheshagiri et al. (US 20180253875 A1; hereafter referred to as Sheshagiri) in view of Miao et al. (CN 107888894 A; machine translation relied upon; hereafter referred to as Miao).
Regarding Claim 1, Sheshagiri teaches:
A method comprising:
generating, using sensor data corresponding to a first time slice and generated based at least on one or more sensors of an ego-machine, two or more image frames representative of two or more overlapping viewpoints around the ego-machine in an environment (Sheshagiri, [0050] “the electronic device 102 may include a camera software application and/or a display 132. When the camera application is running, images of scenes and/or objects that are located within the field of view of the optical system(s) 106 may be captured by the image sensor(s) 104. The images that are being captured by the image sensor(s) 104 may be presented on the display 132. In some configurations, these images may be displayed in rapid succession at a relatively high frame rate so that, at any given moment in time, the objects that are located within the field of view of the optical system 106 are presented on the display 132. The one or more images (e.g., wide-angle image(s), normal image(s), telephoto image(s), etc.) obtained by the electronic device 102 may be one or more video frames, one or more still images, and/or one or more burst frames, etc.”; Sheshagiri, [0035] “Techniques for scene analysis may include disparity vectors in two overlapping image regions, image motion in an overlapping region, and/or object detection (e.g., face detection) in an overlapping region”).
determining, based at least on a state of motion of the ego-machine, whether to use a default placement for a seam or a dynamic placement to avoid placing the seam in a salient region (Sheshagiri, [0056] “The processor 112 may include and/or implement a stitching scheme selector 118. The stitching scheme selector 118 may select a stitching scheme 124 for stitching at least two images… the stitching scheme selector 118 may select a stitching scheme 124 from a set of stitching schemes (e.g., two or more stitching schemes) based on one or more content measures”; Sheshagiri, [0057] “the stitching scheme selector 118 may include a content analyzer 120. The content analyzer 120 may analyze the content of one or more images to determine one or more content measures. Examples of content measures may include a motion measure”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data…the motion measure may indicate an amount of movement (e.g., rotation, translation, etc.) of the electronic device 102”), whether to use a default placement for a seam or a dynamic placement (Sheshagiri, [0063] “The stitching scheme selector 118 may select one or more stitching schemes 124 based on the one or more content measures. For example, if the motion measure is greater than a motion threshold, the stitching scheme selector 118 may select static seam-based stitching… If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. 
If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching.”) to avoid placing the seam in a salient region (Sheshagiri, [0078] “systems and methods disclosed herein may beneficially avoid parallax errors by selecting an appropriate stitching scheme”; Sheshagiri, [0090] “The dynamic seam determiner 764 may determine a seam that avoids going (e.g., cutting, crossing, etc.) through foreground regions (e.g., objects in the foreground of an image)”; Sheshagiri, Fig. 11, [0101] “A dynamic seam 1109 may be determined in order to avoid crossing an object 1105b (e.g., a foreground object, a foreground region, etc.)”);
generating a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Sheshagiri, [0071] “The image stitcher 122 may stitch the images based on one or more selected stitching schemes 124. For example, the image stitcher 122 may stitch the images in accordance with the one or more stitching schemes 124 selected by the stitching scheme selector 118. In some approaches, the image stitcher 122 may utilize multiple stitching schemes 124, each for stitching one or more areas (e.g., an area of an overlapping region, a partitioned area, a sub-region, etc.) of the images. Examples of stitching schemes 124 may include static seam-based stitching, dynamic seam-based stitching, and dynamic warp-based stitching”; Sheshagiri, [0076] “The electronic device 102 may stitch 206 the at least two images based on a selected stitching scheme…FIG. 1. the electronic device 102 may perform image stitching (e.g., image fusion, combining, compositing, etc.) on the images with one or more selected stitching schemes”; Sheshagiri, [0066] “Blending the images may produce a blended output, which may be a weighted combination of the images (e.g., two input images)”).
While Sheshagiri teaches selecting the stitching scheme based on the content measure, wherein the content measure may include the motion measure which indicates an amount of movement (e.g., rotation, translation, etc.) of the electronic device, it fails to explicitly teach:
based at least on a state of motion of the ego-machine;
In the same field of endeavor, Miao teaches:
based at least on a state of motion of the ego-machine (Miao, page 2, S3: “S3: Rendering the 3D full scene view according to the vehicle state information acquired from the onboard sensor to form a panoramic auxiliary perspective view”; Miao, S4: “the 3D panoramic view may be adjusted according to vehicle state information, such as turning and reversing light vehicle state information”; Miao, page 5, detailed description, para 3, “on-vehicle sensor 1 is used to detect vehicle status information such as gear position information, steering information, and vehicle speed information”; Miao, page 6, para 5, “The device includes: a mapping unit 42, a seam selection and fusion unit 44, and a rendering unit 45. When the output unit 46 is specifically implemented, the mapping unit 42 maps the view information from the look-ahead camera into a preset stereoscopic environment model to form a panoramic scene view”); and
generating a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Miao, page 6, para 5 “The joint selection and fusion unit 44 selects and fuses the joints of the picture, and then the rendering unit is based on the onboard sensors. The acquired vehicle state information is rendered to a panoramic scene view to form a panoramic assisted viewing angle view; finally, a panoramic assistant viewing angle view is output through the output unit 46”; page 7, para 3, “Step S12: Selecting and blending the seams: Select the seam position in the overlapped area of the adjacent camera to fuse the textures on the two sides of the seam. The joints can be used for static joints and dynamic joints. Static joints are fixed joints in the camera coincidence area; dynamic joints are defined as non-fixed joints in the overlap area to minimize the texture difference between the two cameras within the seam width”).
Sheshagiri and Miao are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sheshagiri with the method of using the state of motion of the ego-machine as taught by Miao to arrive at an invention that generates two or more image frames representative of two or more overlapping viewpoints around the ego-machine in an environment; determines, based at least on a state of motion of the ego-machine, whether to use a default placement for a seam or a dynamic placement to avoid placing the seam in a salient region; and generates a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam. Doing so would provide rendering of the whole scene according to the vehicle state information acquired from the vehicle sensors so as to form a panoramic auxiliary angle of view (Miao, abstract) and would improve the visual field and accuracy of the on-board visual look-around system (Miao, Summary); thus, one of ordinary skill in the art would have been motivated to combine the references.
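For illustration only, and not as part of the grounds of rejection, the scheme-selection logic quoted from Sheshagiri at [0063] may be paraphrased as the following decision procedure. The function name, parameter names, and threshold values are hypothetical placeholders chosen for readability; only the branch order (motion measure first, then disparity/coverage) is taken from the quoted passage.

```python
# Illustrative paraphrase of the stitching-scheme selection quoted from
# Sheshagiri [0063]. All names and default thresholds are hypothetical.

def select_stitching_scheme(motion, disparity, coverage,
                            motion_threshold=1.0,
                            disparity_threshold=1.0,
                            coverage_threshold=0.5):
    """Return the stitching scheme implied by the quoted content measures."""
    # Motion measure above the motion threshold: the match is treated as
    # unreliable, so static (fixed-seam) stitching is selected.
    if motion > motion_threshold:
        return "static seam-based stitching"
    # Otherwise, high disparity or a coverage measure meeting the coverage
    # criterion leads to dynamic warp-based stitching.
    if disparity > disparity_threshold or coverage >= coverage_threshold:
        return "dynamic warp-based stitching"
    # In all remaining cases, the seam itself is placed dynamically, which
    # allows it to avoid foreground (salient) regions.
    return "dynamic seam-based stitching"
```

Under this reading, the motion measure acts as a gate ahead of the disparity and coverage checks, which is consistent with the order in which the quoted passage presents the conditions.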
Regarding Claim 2, Sheshagiri in view of Miao teaches the method of claim 1, further comprising determining to use the dynamic placement based at least on at least one of a viewport rendering facing substantially forward in the environment or the motion being substantially forward in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 3, Sheshagiri in view of Miao teaches the method of claim 1, further comprising determining to use the dynamic placement based at least on a speed of the motion being below a threshold speed, and at least one of a viewport rendering facing substantially forward in the environment or the motion being substantially forward in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data. The motion measure may indicate an amount of movement (e.g., rotation, translation (forward/backward direction), etc.) of the electronic device 102 between images”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. 
For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 5, Sheshagiri in view of Miao teaches the method of claim 1, further comprising determining to use the default placement based at least on a speed of the ego-machine being above a threshold speed (Sheshagiri, [0063] “if the motion measure is greater than a motion threshold, the stitching scheme selector 118 may select static seam-based stitching”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data”).
Regarding Claim 6, Sheshagiri in view of Miao teaches the method of claim 1, further comprising determining to use the default placement based at least on a determination that there are no detected objects depicted in the two or more image frames that are closer to the ego-machine than a threshold proximity (Sheshagiri, [0078] “high parallax error 446 occurs due to the subject being close to the camera. For instance, there is a greater disparity in the position of the subject between cameras due to the closeness of the subject. It may also be observed that the background does not exhibit as much disparity. The systems and methods disclosed herein may beneficially avoid parallax errors by selecting an appropriate stitching scheme”; Sheshagiri, [0115] “the electronic device 102 may select grid points within a threshold distance from the static seam, a proportion of grid points that are closest to the static seam, one or more sets (e.g., columns, rows, etc.) of grid points that are nearest to the static seam”).
Regarding Claim 7, Sheshagiri in view of Miao teaches the method of claim 1, further comprising determining to use the dynamic placement based at least on a viewport rendering facing substantially backwards in the environment or the motion being substantially backwards in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 10, Sheshagiri in view of Miao teaches the method of claim 1, wherein the method is performed by at least one of:
a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Sheshagiri, [0140] “The electronic device 1802 may be (or may be included within) a camera, video camcorder, digital camera, cellular phone, smart phone, computer (e.g., desktop computer, laptop computer, etc.), tablet device, media player, television, automobile, personal camera, action camera, surveillance camera, mounted camera, connected camera, robot, aircraft, drone, unmanned aerial vehicle (UAV), healthcare equipment, gaming console, personal digital assistants (PDA), set-top box, etc.”).
Regarding Claim 11, Sheshagiri teaches:
A processor comprising (Sheshagiri, [0041] “the electronic device 102 may include a processor 112,”): one or more circuits to:
generate image data of two or more image frames corresponding to two or more overlapping viewpoints of an environment (Sheshagiri, [0050] “the electronic device 102 may include a camera software application and/or a display 132. When the camera application is running, images of scenes and/or objects that are located within the field of view of the optical system(s) 106 may be captured by the image sensor(s) 104. The images that are being captured by the image sensor(s) 104 may be presented on the display 132. In some configurations, these images may be displayed in rapid succession at a relatively high frame rate so that, at any given moment in time, the objects that are located within the field of view of the optical system 106 are presented on the display 132. The one or more images (e.g., wide-angle image(s), normal image(s), telephoto image(s), etc.) obtained by the electronic device 102 may be one or more video frames, one or more still images, and/or one or more burst frames, etc.”; [0035] “Techniques for scene analysis may include disparity vectors in two overlapping image regions, image motion in an overlapping region, and/or object detection (e.g., face detection) in an overlapping region”).
select, based at least on a state of ego-motion of the ego-machine in the environment, between a default placement for a seam or dynamic placement (Sheshagiri, [0056] “The processor 112 may include and/or implement a stitching scheme selector 118. The stitching scheme selector 118 may select a stitching scheme 124 for stitching at least two images… the stitching scheme selector 118 may select a stitching scheme 124 from a set of stitching schemes (e.g., two or more stitching schemes) based on one or more content measures”; Sheshagiri, [0034] “Different approaches may be utilized for stitching. Static seam-based stitching may stitch along a fixed seam in an overlapping region… Dynamic seam-based stitching is another approach”; Sheshagiri, [0057] “the stitching scheme selector 118 may include a content analyzer 120. The content analyzer 120 may analyze the content of one or more images to determine one or more content measures. Examples of content measures may include a motion measure”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data…the motion measure may indicate an amount of movement (e.g., rotation, translation, etc.) of the electronic device 102”; Sheshagiri, [0063] “The stitching scheme selector 118 may select one or more stitching schemes 124 based on the one or more content measures. For example, if the motion measure is greater than a motion threshold, the stitching scheme selector 118 may select static seam-based stitching… If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. 
If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching.”) to avoid placing the seam in a salient region (Sheshagiri, [0078] “systems and methods disclosed herein may beneficially avoid parallax errors by selecting an appropriate stitching scheme”; [0090] “The dynamic seam determiner 764 may determine a seam that avoids going (e.g., cutting, crossing, etc.) through foreground regions (e.g., objects in the foreground of an image)”; Sheshagiri, Fig. 11, [0101] “A dynamic seam 1109 may be determined in order to avoid crossing an object 1105b (e.g., a foreground object, a foreground region, etc.)”);
generate a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Sheshagiri, [0071] “The image stitcher 122 may stitch the images based on one or more selected stitching schemes 124. For example, the image stitcher 122 may stitch the images in accordance with the one or more stitching schemes 124 selected by the stitching scheme selector 118. In some approaches, the image stitcher 122 may utilize multiple stitching schemes 124, each for stitching one or more areas (e.g., an area of an overlapping region, a partitioned area, a sub-region, etc.) of the images. Examples of stitching schemes 124 may include static seam-based stitching, dynamic seam-based stitching, and dynamic warp-based stitching”; Sheshagiri, [0076] “The electronic device 102 may stitch 206 the at least two images based on a selected stitching scheme…FIG. 1. the electronic device 102 may perform image stitching (e.g., image fusion, combining, compositing, etc.) on the images with one or more selected stitching schemes”; Sheshagiri, [0066] “Blending the images may produce a blended output, which may be a weighted combination of the images (e.g., two input images)”).
While Sheshagiri teaches selecting the stitching scheme based on the content measure, wherein the content measure may include the motion measure which indicates an amount of movement (e.g., rotation, translation, etc.) of the electronic device, it fails to explicitly teach:
based at least on a state of ego-motion of the ego-machine in the environment;
In the same field of endeavor, Miao teaches:
based at least on a state of ego-motion of the ego-machine in the environment (Miao, page 2, S3: “S3: Rendering the 3D full scene view according to the vehicle state information acquired from the onboard sensor to form a panoramic auxiliary perspective view”; Miao, S4: “the 3D panoramic view may be adjusted according to vehicle state information, such as turning and reversing light vehicle state information”; Miao, page 5, detailed description, para 3, “on-vehicle sensor 1 is used to detect vehicle status information such as gear position information, steering information, and vehicle speed information”; Miao, page 6, para 5, “The device includes: a mapping unit 42, a seam selection and fusion unit 44, and a rendering unit 45. When the output unit 46 is specifically implemented, the mapping unit 42 maps the view information from the look-ahead camera into a preset stereoscopic environment model to form a panoramic scene view”); and
generate a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Miao, page 6, para 5 “The joint selection and fusion unit 44 selects and fuses the joints of the picture, and then the rendering unit is based on the onboard sensors. The acquired vehicle state information is rendered to a panoramic scene view to form a panoramic assisted viewing angle view; finally, a panoramic assistant viewing angle view is output through the output unit 46”; page 7, para 3, “Step S12: Selecting and blending the seams: Select the seam position in the overlapped area of the adjacent camera to fuse the textures on the two sides of the seam. The joints can be used for static joints and dynamic joints. Static joints are fixed joints in the camera coincidence area; dynamic joints are defined as non-fixed joints in the overlap area to minimize the texture difference between the two cameras within the seam width”).
Sheshagiri and Miao are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sheshagiri with the method of using the state of motion of the ego-machine as taught by Miao to arrive at an invention that generates two or more image frames representative of two or more overlapping viewpoints around the ego-machine in an environment; determines, based at least on a state of motion of the ego-machine, whether to use a default placement for a seam or a dynamic placement to avoid placing the seam in a salient region; and generates a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam. Doing so would provide rendering of the whole scene according to the vehicle state information acquired from the vehicle sensors so as to form a panoramic auxiliary angle of view (Miao, abstract) and would improve the visual field and accuracy of the on-board visual look-around system (Miao, Summary); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 12, Sheshagiri in view of Miao teaches the processor of claim 11, the one or more circuits further to select the dynamic placement based at least on at least one of a viewport rendering facing substantially forward in the environment or the motion being substantially forward in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 13, Sheshagiri in view of Miao teaches the processor of claim 11, the one or more circuits further to select the dynamic placement based at least on a speed of the motion being below a threshold speed, and at least one of a viewport rendering facing substantially forward in the environment or the motion being substantially forward in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data. The motion measure may indicate an amount of movement (e.g., rotation, translation (forward/backward direction), etc.) of the electronic device 102 between images”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. 
For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 15, Sheshagiri in view of Miao teaches the processor of claim 11, wherein the processor is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Sheshagiri, [0140] “The electronic device 1802 may be (or may be included within) a camera, video camcorder, digital camera, cellular phone, smart phone, computer (e.g., desktop computer, laptop computer, etc.), tablet device, media player, television, automobile, personal camera, action camera, surveillance camera, mounted camera, connected camera, robot, aircraft, drone, unmanned aerial vehicle (UAV), healthcare equipment, gaming console, personal digital assistants (PDA), set-top box, etc.”).
Regarding Claim 16, Sheshagiri teaches:
A system comprising:
one or more processing units to use a state machine to select (Sheshagiri, [0041] “the electronic device 102 may include a processor 112, a memory 126”), for two or more image frames generated using sensor data captured during a first time slice and representative of two or more overlapping viewpoints around an ego-machine in an environment (Sheshagiri, [0056] “The processor 112 may include and/or implement a stitching scheme selector 118. The stitching scheme selector 118 may select a stitching scheme 124 for stitching at least two images… the stitching scheme selector 118 may select a stitching scheme 124 from a set of stitching schemes (e.g., two or more stitching schemes) based on one or more content measures”; Sheshagiri, [0034] “Different approaches may be utilized for stitching. Static seam-based stitching may stitch along a fixed seam in an overlapping region… Dynamic seam-based stitching is another approach”; Sheshagiri, [0057] “the stitching scheme selector 118 may include a content analyzer 120. The content analyzer 120 may analyze the content of one or more images to determine one or more content measures. Examples of content measures may include a motion measure”; Sheshagiri, [0058] “the motion measure may be based on motion sensor (e.g., accelerometer) data…the motion measure may indicate an amount of movement (e.g., rotation, translation, etc.) of the electronic device 102”; Sheshagiri, [0063] “The stitching scheme selector 118 may select one or more stitching schemes 124 based on the one or more content measures. For example, if the motion measure is greater than a motion threshold, the stitching scheme selector 118 may select static seam-based stitching… If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”), based at least on a state of motion of the ego-machine, between a default placement for a seam or a dynamic placement that attempts to avoid placing the seam in a salient region (Sheshagiri, [0078] “systems and methods disclosed herein may beneficially avoid parallax errors by selecting an appropriate stitching scheme”; Sheshagiri, [0090] “The dynamic seam determiner 764 may determine a seam that avoids going (e.g., cutting, crossing, etc.) through foreground regions (e.g., objects in the foreground of an image)”; Sheshagiri, Fig. 11, [0101] “A dynamic seam 1109 may be determined in order to avoid crossing an object 1105b (e.g., a foreground object, a foreground region, etc.)”) and to generate a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Sheshagiri, [0071] “The image stitcher 122 may stitch the images based on one or more selected stitching schemes 124. For example, the image stitcher 122 may stitch the images in accordance with the one or more stitching schemes 124 selected by the stitching scheme selector 118. In some approaches, the image stitcher 122 may utilize multiple stitching schemes 124, each for stitching one or more areas (e.g., an area of an overlapping region, a partitioned area, a sub-region, etc.) of the images. Examples of stitching schemes 124 may include static seam-based stitching, dynamic seam-based stitching, and dynamic warp-based stitching”; Sheshagiri, [0076] “The electronic device 102 may stitch 206 the at least two images based on a selected stitching scheme…FIG. 1. the electronic device 102 may perform image stitching (e.g., image fusion, combining, compositing, etc.) on the images with one or more selected stitching schemes”; Sheshagiri, [0066] “Blending the images may produce a blended output, which may be a weighted combination of the images (e.g., two input images)”).
While Sheshagiri teaches selecting the stitching scheme based on a content measure, wherein the content measure may include a motion measure that indicates an amount of movement (e.g., rotation, translation, etc.) of the electronic device, Sheshagiri fails to explicitly teach:
based at least on a state of motion of the ego-machine in the environment;
In the same field of endeavor, Miao teaches:
based at least on a state of motion of the ego-machine in the environment (Miao, page 2, S3: “S3: Rendering the 3D full scene view according to the vehicle state information acquired from the onboard sensor to form a panoramic auxiliary perspective view”; Miao, S4: “the 3D panoramic view may be adjusted according to vehicle state information, such as turning and reversing light vehicle state information”; Miao, page 5, detailed description, para 3, “on-vehicle sensor 1 is used to detect vehicle status information such as gear position information, steering information, and vehicle speed information”; Miao, page 6, para 5, “The device includes: a mapping unit 42, a seam selection and fusion unit 44, and a rendering unit 45. When the output unit 46 is specifically implemented, the mapping unit 42 maps the view information from the look-ahead camera into a preset stereoscopic environment model to form a panoramic scene view”); and
generate a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam (Miao, page 6, para 5 “The joint selection and fusion unit 44 selects and fuses the joints of the picture, and then the rendering unit is based on the onboard sensors. The acquired vehicle state information is rendered to a panoramic scene view to form a panoramic assisted viewing angle view; finally, a panoramic assistant viewing angle view is output through the output unit 46”; page 7, para 3, “Step S12: Selecting and blending the seams: Select the seam position in the overlapped area of the adjacent camera to fuse the textures on the two sides of the seam. The joints can be used for static joints and dynamic joints. Static joints are fixed joints in the camera coincidence area; dynamic joints are defined as non-fixed joints in the overlap area to minimize the texture difference between the two cameras within the seam width”).
Sheshagiri and Miao are considered analogous art, as both are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sheshagiri with the method of using the state of motion of the ego-machine as taught by Miao to arrive at an invention that generates two or more image frames representative of two or more overlapping viewpoints around the ego-machine in an environment; determines, based at least on a state of motion of the ego-machine, whether to use a default placement for a seam or a dynamic placement that avoids placing the seam in a salient region; and generates a composite image based at least on stitching image data of the two or more image frames using the default placement or the dynamic placement of the seam. Doing so would provide rendering of the whole scene according to the vehicle state information acquired from the on-vehicle sensors so as to form a panoramic auxiliary angle of view (Miao, abstract) and would improve the visual field and accuracy of the on-board visual look-around system (Miao, Summary); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 17, Sheshagiri in view of Miao teaches the system of claim 16, the one or more processing units further to select the default placement based at least on a determination that there are no detected objects depicted in the two or more image frames that are closer to the ego-machine than a threshold proximity (Sheshagiri, [0078] “high parallax error 446 occurs due to the subject being close to the camera. For instance, there is a greater disparity in the position of the subject between cameras due to the closeness of the subject. It may also be observed that the background does not exhibit as much disparity. The systems and methods disclosed herein may beneficially avoid parallax errors by selecting an appropriate stitching scheme”; Sheshagiri, [0115] “the electronic device 102 may select grid points within a threshold distance from the static seam, a proportion of grid points that are closest to the static seam, one or more sets (e.g., columns, rows, etc.) of grid points that are nearest to the static seam”).
Regarding Claim 18, Sheshagiri in view of Miao teaches the system of claim 16, the one or more processing units further to select the dynamic placement based at least on a viewport rendering facing substantially backwards in the environment or the motion being substantially backwards in the environment (Sheshagiri, [0063] “the stitching scheme selector 118 may determine whether a match is unreliable (if the motion measure is not greater than a threshold, for example). If the match is unreliable, the stitching scheme selector 118 may select static seam-based stitching. If the motion measure is not greater than the motion threshold (and/or if the match is reliable, for example), the stitching scheme selector 118 may determine whether the disparity measure is greater than a disparity threshold or whether the coverage measure meets a coverage criterion (e.g., a coverage threshold). If the disparity measure is greater than the disparity threshold or if the coverage measure meets the coverage criterion, the scheme selector 118 may select dynamic warp-based stitching. If the disparity measure is not greater than the disparity threshold or if the coverage measure does not meet the coverage criterion, the scheme selector 118 may select dynamic seam-based stitching”; Miao, page 5, description, para 2, “the on-vehicle sensor 3 may be multiple, and the specific set quantity Can be adjusted according to actual needs, for example, the camera can be set in front, rear, left and right directions (forward/ backward). The on-vehicle sensor can also be set according to the detected different functions. For example, a square disk detection sensor, a vehicle speed detection sensor, a gear position detection sensor, etc. The on-vehicle sensor 1 is used to detect vehicle status information such as gear position information (forward/backward directions), steering information, and vehicle speed information”).
Regarding Claim 20, Sheshagiri in view of Miao teaches the system of claim 16, wherein the system is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Sheshagiri, [0140] “The electronic device 1802 may be (or may be included within) a camera, video camcorder, digital camera, cellular phone, smart phone, computer (e.g., desktop computer, laptop computer, etc.), tablet device, media player, television, automobile, personal camera, action camera, surveillance camera, mounted camera, connected camera, robot, aircraft, drone, unmanned aerial vehicle (UAV), healthcare equipment, gaming console, personal digital assistants (PDA), set-top box, etc.”).
Allowable Subject Matter
Claims 4, 8 – 9, 14 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20170140791 A1 MULTIPLE CAMERA VIDEO IMAGE STITCHING BY PLACING SEAMS FOR SCENE OBJECTS
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664