Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Newly submitted claims 17-19 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons: claims 17-19 are directed to a species that is independent and distinct because it recites mutually exclusive characteristics (e.g., wherein the first image area is with a first resolution that is in positive correlation with the current motion speed of the vehicle, and the second image area is with a second resolution that remains identical and is uncorrelated with the current motion speed of the vehicle.) The species require a different field of search (e.g., searching different classes/subclasses, for example G06T 7/20, electronic resources, and/or employing different search queries), the prior art applicable to one species would not likely be applicable to another species, and/or the species are likely to raise different non-prior art issues under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph.
Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 17-19 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03.
To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.
Response to Arguments
Applicant's arguments filed 09/16/2024 have been fully considered but they are not persuasive. Regarding claim 1, Applicant argues (pgs. 9-10 of the Remarks) that Brueckner fails to teach “wherein the transferring step reduces the resolution from the remaining area of the camera image, which is not in the current focus area, to the second image area of the environment image by combining a plurality of pixels of the camera image into a single pixel of the environment image.” Examiner respectfully disagrees. Brueckner teaches (¶0021, ¶0038, ¶0041, ¶0058) downsampling the raw image capture data into one or more reduced resolution image portions, where the area outside the region(s) of interest can correspond to the reduced resolution image portions with downsampled image capture data. In some examples, the area inside the region(s) of interest has a second resolution that corresponds to the first resolution, e.g., the same full frame/full resolution capability of the image sensor within that portion of the image. In such examples, the raw image capture data obtained by the image sensor within the region(s) of interest can be retained without downsampling. Examiner notes that downsampling creates an image with lower resolution: the downsampled sections contain fewer pixels because a plurality of pixels from the full resolution/raw image is combined into a single pixel that represents them, generating the same image with less detail.
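For illustration only, the pixel-combining operation described above can be expressed as block averaging, in which each block of source pixels is collapsed into a single output pixel. The following Python sketch shows the general technique, not Brueckner's actual implementation; the function name and block size are hypothetical.

    import numpy as np

    def downsample_by_block_average(image, block=2):
        """Combine each `block` x `block` group of pixels into a single pixel
        by averaging, yielding the same image with less detail."""
        h, w = image.shape[:2]
        h, w = h - h % block, w - w % block               # trim to whole blocks
        blocks = image[:h, :w].reshape(h // block, block, w // block, block, -1)
        return blocks.mean(axis=(1, 3)).astype(image.dtype)

    # Example: a 4 x 4 single-channel image becomes a 2 x 2 image.
    full_res = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
    print(downsample_by_block_average(full_res).shape)    # (2, 2, 1)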
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-11, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Brueckner et al. (US 20180189574, hereinafter Brueckner) in view of Iandola et al. (US 20180275658, hereinafter Iandola), Yasuda et al. (US 20190197730, hereinafter Yasuda), Campbell (US 11250054), and Trofymov (US 20210208281.)
Regarding claim 1, “A method for providing an environment image on the basis of a camera image from a vehicle camera of a vehicle for monitoring an environment of the vehicle, the camera image having a camera image area with a camera resolution, for further processing, by a driving assistance system of the vehicle, comprising” Brueckner teaches (¶0019 and Fig. 5) obtaining images for use in vehicle applications. Regions of interest can be dynamically determined within an image frame based on real-time environmental factor data characterizing the operational environment of the image capture device and associated vehicle. The enhanced digital images then can be further analyzed in autonomous vehicle applications, such as those involving object detection and vehicle control; (¶0082 and Fig. 7) The digital image output provided at (708) can be further analyzed to identify at (710) at least one object in the digital image.
As to “determining a position of the vehicle camera on the vehicle” Brueckner teaches (¶0022) In some examples, the environmental factor data can include orientation data that indicates an orientation of the image capture device as positioned relative to a vehicle.
As to “capturing at least one current motion parameter of the vehicle” Brueckner teaches (¶0060) In some examples, real-time environmental factor data 120 can include, for example, terrain data including elevation values associated with a vehicle's current geographic location as obtained from a mapping system 512 provided at vehicle 502. Vehicle 502 can include a mapping system 512 including one or more location sensors for determining actual and/or relative position of vehicle 502 by using a satellite navigation positioning system (e.g. a GPS system, a Galileo positioning system, the Global Navigation Satellite System (GNSS), the BeiDou Satellite Navigation and Positioning system), an inertial navigation system, a dead reckoning system, based on IP address, by using triangulation and/or proximity to cellular towers or WiFi hotspots, beacons, and the like and/or other suitable techniques for determining position relative to a known geographic reference such as a navigational map of a given area. Because location sensors within mapping system 512 can determine where vehicle 502 is relative to geographic references, mapping system 512 can also determine terrain data 122 associated with the current geographic location of vehicle 502. Terrain data 122 can include elevation information (e.g., an altitude value) associated with a location (e.g., latitude and longitude coordinate values) as determined from an elevation database providing three-dimensional model values corresponding to different locations across the earth's surface.
As to “determining a current focus area within the camera image on the basis of the position of the vehicle camera on the vehicle and the at least one current motion parameter” Brueckner teaches (¶0019) The regions of interest can be dynamically determined within a target field of view having a smaller area than full image frames of an image sensor; (¶0022, ¶0088, and Fig. 7) determine one or more regions of interest within a full image frame based at least in part from the one or more image target parameters identified at (802), the differences in elevation determined at (804), the one or more sensed objects detected at (806), and/or the position and/or orientation of the image capture device determined at (808)
As to “transferring pixels of the camera image into the environment image, wherein pixels from the current focus area are transferred with a first resolution into a first image area of the environment image that corresponds to the current focus area, and pixels of the camera image from a remaining area that is not in the current focus area are transferred with a second resolution into a second image area of the environment image corresponding thereto, wherein the second resolution is lower than the first resolution.” Brueckner teaches (¶0019) Digital image outputs can be generated having a second resolution within the one or more regions of interest and a third resolution outside the one or more regions of interest, wherein the third resolution is less than the second resolution. In some examples, the second resolution is the same as the first resolution although the second can be less than the first resolution in other examples. Digital image outputs having different resolutions inside and outside the regions of interest provide a focused field of view on relevant image portions for specific applications while reducing file size and computational bandwidth for subsequent image processing.
As to “and wherein the transferring step reduces the resolution from the remaining area of the camera image, which is not in the current focus area, to the second image area of the environment image by combining a plurality of pixels of the camera image into a single pixel of the environment image.” Brueckner teaches (¶0021, ¶0038, ¶0041, ¶0058) downsampling the raw image capture data into one or more reduced resolution image portions, where the area outside the region(s) of interest can correspond to the reduced resolution image portions with downsampled image capture data. In some examples, the area inside the region(s) of interest has a second resolution that corresponds to the first resolution, e.g., the same full frame/full resolution capability of the image sensor within that portion of the image. In such examples, the raw image capture data obtained by the image sensor within the region(s) of interest can be retained without downsampling. Examiner notes that downsampling creates an image with lower resolution: the downsampled sections contain fewer pixels because a plurality of pixels from the full resolution/raw image is combined into a single pixel that represents them, generating the same image with less detail.
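Continuing the illustration (and reusing downsample_by_block_average from the earlier sketch; all names are hypothetical and this is not the reference's implementation), the claimed transferring step can be sketched as keeping the focus area at the first, full resolution while the remaining frame is transferred at the lower second resolution:

    import numpy as np  # downsample_by_block_average as defined in the earlier sketch

    def transfer_to_environment_image(camera_image, focus, block=2):
        """Return the first image area (the focus area, kept at the first/full
        resolution) and the second image area (transferred at a lower second
        resolution by pixel combination). `focus` is (top, bottom, left, right).
        For brevity the reduced-resolution transfer covers the whole frame; a
        real implementation would exclude the focus area from the second area."""
        top, bottom, left, right = focus
        first_image_area = camera_image[top:bottom, left:right].copy()
        second_image_area = downsample_by_block_average(camera_image, block)
        return first_image_area, second_image_area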
Examiner notes that Brueckner teaches (¶0076) the autonomy system 646 can include various models to detect objects (e.g., people, animals, vehicles, bicycles, buildings, roads, road features, road conditions (e.g., curves, potholes, dips, bumps, changes in grade), navigational objects such as signage, distances between vehicle 502 and other vehicles and/or objects, etc.) based, at least in part, on the acquired image data, other sensed data and/or map data, the autonomy system 646 can include machine-learned models that use the digital image outputs acquired by the image capture device 100 or other data acquisition system(s) and/or the map data to help operate the vehicle 502.
Brueckner does not teach “and detecting and classifying objects in the environment image using neural networks.” However, Iandola teaches (¶0064) detection models determine which ROIs contain objects, and then classify the objects into categories such as guard rails, road signs, traffic cones, bicyclists, animals, or vehicles. The detection models may be convolutional neural network models to identify ROIs and classify objects. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner with the neural network for object detection/classification as taught by Iandola for the benefit of an autonomous control system that guides the vehicle in a safe manner, is capable of predictive modeling and adaptive control, can learn from experience, and can derive conclusions from complex and seemingly unrelated sets of information.
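For illustration only, ROI classification with a convolutional neural network can be sketched as below, using a generic pretrained ImageNet classifier as a stand-in for Iandola's detection models; the model choice and all names are assumptions, not the reference's implementation.

    import torch
    from torchvision.models import resnet18, ResNet18_Weights
    from torchvision.transforms.functional import to_pil_image

    weights = ResNet18_Weights.DEFAULT            # generic pretrained classifier
    model = resnet18(weights=weights).eval()
    preprocess = weights.transforms()

    def classify_rois(frame, rois):
        """Classify the contents of each region of interest (ROI).
        `frame` is an H x W x 3 uint8 array; each ROI is (top, bottom, left, right)."""
        labels = []
        with torch.no_grad():
            for top, bottom, left, right in rois:
                crop = to_pil_image(frame[top:bottom, left:right])
                logits = model(preprocess(crop).unsqueeze(0))
                labels.append(weights.meta["categories"][logits.argmax(1).item()])
        return labels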
Brueckner and Iandola do not teach “wherein the current focus area is adjusted to be smaller at high speed and larger at low speed.” However, Yasuda teaches (¶0101 and Fig. 12) in the case where the moving speed of the automobile 1 is faster than 0 and slower than 30 km/h, a focus area FA1 is decided on the basis of a focus distance L1 and an angle of view θ1. In addition, in the case where the moving speed of the automobile 1 is 30 km/h or faster and slower than 60 km/h, a focus area FA2 is decided on the basis of L2 longer than the focus distance L1 and θ2 narrower than the angle of view θ1. Further, in the case where the moving speed of the automobile 1 is 60 km/h or faster, a focus area FA3 is decided on the basis of L3 longer than focus distance L2 and θ3 narrower than angle of view θ2. As described above, the semiconductor device 400 according to the fourth embodiment sets the focus distance to be shorter and the angle of view to be wider as the moving speed of the automobile 1 becomes slower. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner and Iandola with the focus adjustment based on vehicle speed as taught by Yasuda in order to capture pertinent objects in the environment (¶0099.)
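The speed thresholds in Yasuda's FIG. 12 (30 km/h and 60 km/h) suggest a simple mapping, sketched below for illustration only; the specific focus distances and view angles returned are placeholders, not values taken from the reference.

    def decide_focus_area(speed_kmh):
        """Return (focus_distance_m, view_angle_deg) for the current speed,
        following the pattern of Yasuda FIG. 12: slower speed gives a shorter
        focus distance and a wider angle of view. Numeric values are placeholders."""
        if speed_kmh < 30:        # FA1: focus distance L1, angle of view theta1
            return 20.0, 120.0
        elif speed_kmh < 60:      # FA2: L2 > L1, theta2 < theta1
            return 50.0, 80.0
        return 100.0, 50.0        # FA3: L3 > L2, theta3 < theta2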
Brueckner, Iandola, and Yasuda do not teach “applying fisheye distortion correction to the camera image when the vehicle camera is a fisheye camera.” However, Campbell teaches (20:64-67) The video analytics module 490 may be configured to correct distortion caused by the lens 456. For example, de-warping may be performed to correct distortions caused by a wide (e.g., fisheye) lens. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, and Yasuda with the de-warping as taught by Campbell for the benefit of more accurately detecting and/or analyzing objects in the surroundings.
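For illustration only, fisheye de-warping can be sketched with OpenCV's fisheye camera model (an assumed tool choice; Campbell does not specify an implementation). The camera matrix K and distortion coefficients D are assumed to come from a prior calibration of the actual lens.

    import cv2
    import numpy as np

    def dewarp_fisheye(image, K, D):
        """Correct fisheye lens distortion. K is the 3x3 camera matrix and D the
        four fisheye distortion coefficients obtained from calibration."""
        h, w = image.shape[:2]
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
        return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)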
Brueckner, Iandola, Yasuda, and Campbell do not teach “and wherein the current focus area is aligned with a horizon line in the camera image that separates ground from sky.” However, Trofymov teaches (¶0033, Figs. 9 and 12A-12C) The virtual horizon may indicate a horizon elevation line in the absence of certain obscuring elements (e.g., hills, tree lines, other vehicles) within a driving environment, or a suitable vertical look direction approximately separating horizontal surface elements of the driving environment from those substantially above the surface. The imaging system may adjust the vertical orientation of the entire sensor, the vertical field of regard, and/or the area of focus (e.g., changing the density of lidar scan lines in one or more vertical regions) in response to the determined VROI. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, Yasuda, and Campbell with the horizon line determination as taught by Trofymov for the benefit of a better understanding of the surrounding environment/objects when the road grade changes, for example when going uphill/downhill.
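As a crude illustrative heuristic only (Trofymov determines the virtual horizon from the driving environment and elevation data, not as sketched here), a horizon row can be estimated from image gradients and used to center the focus area vertically:

    import numpy as np

    def align_focus_to_horizon(gray, focus_height):
        """Center the focus area's vertical extent on an estimated horizon row,
        here crudely taken as the row of a grayscale image with the strongest
        mean vertical intensity change (the sky/ground transition)."""
        row_change = np.abs(np.diff(gray.astype(np.float32), axis=0)).mean(axis=1)
        horizon_row = int(row_change.argmax())
        top = max(0, horizon_row - focus_height // 2)
        return top, min(gray.shape[0], top + focus_height)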
Regarding claim 2, Brueckner and Iandola do not teach “The method as claimed in claim 1, wherein capturing at least one current motion parameter of the vehicle comprises capturing a direction of motion of the vehicle.” However, Yasuda teaches (¶0099-¶0101) the focus area decision circuit 431 may obtain steering angle information of the automobile 1 in addition to the moving speed to decide the position of the angle of view in accordance with the obtained steering angle information. For example, in the case where the automobile 1 travels in the right direction on the basis of the steering angle information, the angle of view may be shifted to the right side. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner and Iandola with the focus area decision as taught by Yasuda for the benefit of determining where to best define the region of interest where objects might be present.
Regarding claim 3, Brueckner and Iandola do not teach “The method as claimed in claim 1, wherein capturing at least one current motion parameter of the vehicle comprises capturing a current motion speed of the vehicle.” However, Yasuda teaches (¶0099-¶0101) In FIG. 12, in the case where the moving speed of the automobile 1 is faster than 0 and slower than 30 km/h, a focus area FA1 is decided on the basis of a focus distance L1 and an angle of view θ1. The focus area decision circuit 431 may obtain steering angle information of the automobile 1 in addition to the moving speed to decide the position of the angle of view in accordance with the obtained steering angle information. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner and Iandola with the focus area decision as taught by Yasuda for the benefit of determining where to best define the region of interest where objects might be present.
Regarding claim 4, “The method as claimed in claim 2, wherein capturing at least one current motion parameter of the vehicle comprises capturing a change in the position of objects between camera images or environment images that were provided with a time delay, and determining the current direction of motion and/or the current motion speed of the vehicle on the basis of the captured change in the position of the objects.” Yasuda teaches (¶0099 and ¶0047) the focus area decision circuit 431 has a function of deciding a focus area when deciding a subject on the basis of object data. The focus area decision circuit 431 may have a function of receiving the object data from the object data generation circuit 131 to decide the focus area in accordance with the received object data; the image processing circuit 130 calculates a moving vector of the subject on the basis of a plurality of pieces of distance data obtained at a plurality of different times. Further, the image processing circuit 130 estimates a relative position between the camera 980 and the subject at the imaging estimated time on the basis of the moving vector. Then, the image processing circuit 130 decides the imaging conditions at the imaging estimated time on the basis of the relative position, and instructs the camera 980 to image at the imaging estimated time in accordance with the decided imaging conditions. In addition to the above-described function, the image processing circuit 130 can also receive a plurality of pieces of image data generated by the camera 980 to synthesize the received plural pieces of image data; (¶0050) the focal distance decision circuit 139 estimates a relative position between the automobile 1 and the subject at the imaging estimated time on the basis of the data related to the movement of the automobile 1, the subject data, the imaging estimated time, and the moving vector data; (¶0051) data generated by the own vehicle position estimation circuit 135 will be hereinafter referred to as own vehicle position estimation data. The own vehicle position estimation data is position estimation data of the automobile 1 that is a moving object, and is data for correcting the subject position estimation data; (¶0047 and ¶0052) The subject position estimation circuit 136 has a function of estimating a relative position with the subject at the imaging estimated time on the basis of the subject data, the moving vector data, and the imaging estimated time. The subject position estimation circuit 136 receives the detected moving vector data from the moving vector calculation circuit 132 and the subject data from the subject decision circuit 133. In addition, the subject position estimation circuit 136 receives data related to the imaging estimated time from the imaging time decision circuit 134. The subject position estimation circuit 136 calculates a relative position with the subject at the imaging estimated time on the basis of these pieces of received data, and transmits the calculated data to the focal distance correction circuit 137. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner and Iandola with the estimation of motion from changes in object position as taught by Yasuda for the benefit of determining where to best define the region of interest where objects might be present.
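For illustration only, deriving speed and direction of motion from the change in an object's position between two time-delayed frames reduces to a displacement divided by the elapsed time; the vehicle-relative (x, y) coordinate convention below is an assumption, not the reference's formulation.

    import math

    def estimate_motion(prev_pos, curr_pos, dt_s):
        """Estimate speed (m/s) and direction of motion (radians) from an
        object's (x, y) positions, in meters relative to the vehicle, observed
        in two frames captured dt_s seconds apart."""
        dx = curr_pos[0] - prev_pos[0]
        dy = curr_pos[1] - prev_pos[1]
        return math.hypot(dx, dy) / dt_s, math.atan2(dy, dx)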
Regarding claim 5, “The method as claimed in claim 1, wherein determining a current focus area within the camera image on the basis of the position of the vehicle camera on the vehicle and the at least one current motion parameter comprises selecting the current focus area from a plurality of predefined focus areas.” Brueckner teaches (¶0019, ¶0022) Regions of interest (plural) can be dynamically determined within an image frame based on real-time environmental factor data characterizing the operational environment of the image capture device and associated vehicle. The regions of interest can be dynamically determined within a target field of view having a smaller area than full image frames of an image sensor. Example real-time environmental factor data can include, for example, terrain data associated with a vehicle's current geographic location, orientation data indicating an orientation of the image capture device as positioned relative to the vehicle, and/or sensed object data identifying the location of sensed objects near the vehicle. (¶0025, ¶0026) In some examples, the target field of view is established relative to one or more predefined target field of view definitions to limit the possible locations of the one or more regions of interest within a target field of view.
Regarding claim 6, “The method as claimed in claim 1, wherein in that determining a current focus area within the camera image on the basis of the position of the vehicle camera on the vehicle and the at least one current motion parameter comprises determining a horizontal position of the current focus area based on the camera image as a right-hand focus area, a middle focus area or a left-hand focus area.” Brueckner teaches (¶0045, ¶0049, ¶0052 and Fig. 4) The relative location of each region of interest 308 within area 302 can be defined relative to a top spacing 314, a bottom spacing 316, a left spacing 318 and a right spacing 320. Top spacing 314 corresponds to a distance between a top edge 324 of area 302 and a top edge 334 of region of interest 308. Bottom spacing 316 corresponds to a distance between a bottom edge 326 of area 302 and a bottom edge 336 of region of interest 308. Left spacing 318 corresponds to a distance between a left edge 328 of area 302 and a left edge 338 of region of interest 308. Right spacing 320 corresponds to a distance between a right edge 330 of area 302 and a right edge 340 of region of interest 308. The relative location of region of interest 308 within area 302 in terms of upper spacing 314, lower spacing 316, left spacing 318 and right spacing 320 can be dynamically determined by the one or more portions of environmental factor data 120.
Regarding claim 7, “The method as claimed in claim 1, wherein determining a current focus area within the camera image on the basis of the position of the vehicle camera on the vehicle and the at least one current motion parameter comprises determining a size of the current focus area within the camera image.” Brueckner teaches (¶0079 and Figs. 1, 5, 8) Determining one or more regions of interest at (704) can include determining a size of each region of interest (e.g., one or more of a height dimension or a height percentage corresponding to a portion of the height dimension of a full image frame or a target field of view and/or a width dimension or a width percentage corresponding to a portion of the width dimension of a full image frame or a target field of view). The one or more regions of interest determined at (704) can be based at least in part on one or more portions of environmental factor data including real time data indicative of one or more environmental factors defining the operational environment of the image capture device as positioned relative to a vehicle, such as environmental factor data described with more particular reference to FIGS. 1 and 5. More particular details regarding determining regions of interest at (704) based on environmental factor data and/or other parameters are presented with reference to FIG. 8.
Regarding claim 9, “The method as claimed in claim 1, wherein transferring pixels of the camera image into the environment image, wherein pixels from the current focus area are transferred with a first resolution into a first image area of the environment image that corresponds to the current focus area, and pixels of the camera image from a remaining area that is not in the current focus area are transferred with a second resolution into a second image area of the environment image corresponding thereto, wherein the second resolution is lower than the first resolution, comprises transferring the pixels in a vertical direction with a lower resolution than in a horizontal direction.” Brueckner teaches (Fig. 1 and ¶0039) each column of pixels in the image output 117 demonstrates a change in resolution; the pixels found in region of interest 118 are at a higher resolution than those found in the areas outside the region of interest 119; (¶0021 and ¶0036) the image is a digital image captured at a plurality of pixels.
Regarding claim 10, “The method as claimed in claim 1, wherein transferring pixels of the camera image into the environment image comprises reducing the resolution from the remaining area of the camera image, which is not in the current focus area, to the second image area of the environment image corresponding thereto” Brueckner teaches (¶0019) Digital image outputs can be generated having a second resolution within the one or more regions of interest and a third resolution outside the one or more regions of interest, wherein the third resolution is less than the second resolution. In some examples, the second resolution is the same as the first resolution although the second can be less than the first resolution in other examples. Digital image outputs having different resolutions inside and outside the regions of interest provide a focused field of view on relevant image portions for specific applications while reducing file size and computational bandwidth for subsequent image processing.
Regarding claim 11, “The method as claimed in claim 9, wherein transferring pixels of the camera image into the environment image comprises reducing the resolution from the current focus area of the camera image to the first image area of the environment image that corresponds to the current focus area.” Brueckner teaches (Fig. 1, ¶0019, ¶0021, ¶0038) how raw image data is downsampled to a first resolution found in the region of interest.
Regarding claim 14, “An image unit for providing an environment image on the basis of a camera image from a vehicle camera of a vehicle for monitoring an environment of the vehicle, for further processing, by a driving assistance system of the vehicle, comprising: at least one vehicle camera for providing the camera image having a camera image area and a camera resolution; and a control unit which is connected to the at least one vehicle camera via a data bus and receives the respective camera image via the data bus, wherein the image unit is configured to carry out the method for providing an environment image as claimed in claim 1 for the at least one vehicle camera.” Brueckner teaches (Fig. 6, ¶0037) system includes an image sensor/capture device, image processor(s) 110, data links 116, a communication channel 612, and a vehicle control system. See rejection for claim 1.
Regarding claim 15, “A driving assistance system for a vehicle for providing at least one driving assistance function based on monitoring an environment of the vehicle, comprising at least one image unit as claimed in claim 14.” Brueckner teaches (¶0019, ¶0074-¶0076, ¶0081) system described is for autonomous vehicle applications.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Brueckner, Iandola, Yasuda, Campbell, and Trofymov in view of Jeong et al. (US 20210337264, hereinafter Jeong.)
Regarding claim 8, “The method as claimed in claim 1, wherein transferring pixels of the camera image into the environment image comprises transferring the pixels with a continuous transition between the first and the second image area.” Brueckner teaches (Fig. 1 and ¶0039) that the transfer from the full frame/full resolution raw image capture 115 to the digital image output 117, which includes the first and second portions with different resolutions, shows pixels demonstrating continuity of the image; (¶0021 and ¶0036) the image is a digital image captured at a plurality of pixels.
Brueckner, Iandola, Yasuda, Campbell, and Trofymov do not teach “without abrupt change in resolution of the environment image.” However, Jeong teaches (¶0007, ¶0052, ¶0067) determining a source resolution of the first video as a first resolution of a first region corresponding to a target resolution of the grid, determining the axes of the grid such that a resolution of a remaining second region other than the first region is down-sampled to a second resolution lower than the first resolution, and determining the axes of the grid such that a resolution of third regions adjacent to the first region is down-sampled to third resolutions gradually changed from the first resolution to the second resolution. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, Yasuda, Campbell, and Trofymov with gradually changing resolution as taught by Jeong for the benefit of outputting a video that is less visually jarring/more aesthetically pleasing for the user.
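For illustration only, a gradual resolution transition in the spirit of Jeong's third regions can be sketched by blending a full-resolution image with a downsampled copy using a distance-based ramp; this masking/blending approach is an illustrative assumption, not Jeong's grid-based method.

    import cv2
    import numpy as np

    def gradual_transition(image, roi_mask, low_res_block=4, ramp_px=32):
        """Blend a full-resolution H x W x 3 image with a downsampled then
        re-upsampled copy so that resolution degrades gradually with distance
        from the ROI, avoiding an abrupt change. `roi_mask` is an H x W array
        of ones inside the ROI and zeros elsewhere."""
        h, w = image.shape[:2]
        low = cv2.resize(cv2.resize(image, (w // low_res_block, h // low_res_block)),
                         (w, h), interpolation=cv2.INTER_NEAREST)
        # Distance in pixels from each pixel to the ROI (0 inside the ROI).
        dist = cv2.distanceTransform((1 - roi_mask).astype(np.uint8), cv2.DIST_L2, 3)
        alpha = np.clip(dist / ramp_px, 0.0, 1.0)[..., None]
        return (image * (1 - alpha) + low * alpha).astype(image.dtype)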
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Brueckner, Iandola, Yasuda, Campbell, and Trofymov in view of Lee et al. (US 20210192231, hereinafter Lee.)
Regarding claim 12, Brueckner, Iandola, Yasuda, Campbell, and Trofymov do not teach “The method as claimed in claim 1, transferring pixels of the camera image into the environment image comprises increasing the resolution from the current focus area of the camera image to the first image area of the environment image that corresponds to the current focus area.” However, Lee teaches (¶0031, ¶0037) the ROIs may be subject to upscaling and/or downscaling; (¶0042) upscaling ROIs containing small/distant objects. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, Yasuda, Campbell, and Trofymov with the upscaling as taught by Lee for the benefit of detecting small/distant objects due to the larger number of pixels representing the objects as compared to the original image.
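For illustration only, upscaling an ROI before detection (as Lee suggests for small/distant objects) can be sketched as follows; the interpolation choice and names are assumptions, not the reference's implementation.

    import cv2

    def upscale_roi(image, roi, factor=2):
        """Enlarge a region of interest so small/distant objects are represented
        by more pixels than in the original image before detection is run."""
        top, bottom, left, right = roi
        return cv2.resize(image[top:bottom, left:right], None,
                          fx=factor, fy=factor, interpolation=cv2.INTER_CUBIC)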
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Brueckner, Iandola, Yasuda, Campbell, and Trofymov in view of Okada (US 20180352167, hereinafter Okada.)
Regarding claim 13, Brueckner, Iandola, Yasuda, Campbell, and Trofymov do not teach “The method as claimed in claim 1, wherein transferring pixels of the camera image into the environment image comprises transferring the pixels of the camera image into the environment image based on a transfer rule in a look-up table for differently determined focus areas.” However, Okada teaches (¶0007, ¶0061, ¶0069) a storage unit that stores, in a lookup table, a correspondence relationship between distance information with respect to a subject and lens position information of the focus lens. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, Yasuda, Campbell, and Trofymov with the general concept of using a look-up table as taught by Okada for the benefit of performing focus control without depending on environmental conditions and optical conditions (¶0105.)
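For illustration only, a transfer rule stored in a look-up table keyed by differently determined focus areas can be sketched as below; the table entries are hypothetical, and Okada's table instead maps subject distance information to lens position.

    # Hypothetical transfer rules: each entry gives the pixel-combination block
    # size for the first (focus) and second (remaining) image areas.
    TRANSFER_LUT = {
        "left":   {"first_block": 1, "second_block": 4},
        "middle": {"first_block": 1, "second_block": 4},
        "right":  {"first_block": 1, "second_block": 4},
        "narrow_high_speed": {"first_block": 1, "second_block": 8},
    }

    def transfer_rule_for(focus_area):
        """Look up the transfer rule for the previously determined focus area."""
        return TRANSFER_LUT[focus_area]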
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Brueckner, Iandola, Yasuda, Campbell, and Trofymov in view of Vallespi-Gonzalez et al. (US 20190171912, hereinafter Vallespi-Gonzalez.)
Regarding claim 16, Brueckner, Iandola, Yasuda, Campbell, and Trofymov do not teach “The method as claimed in claim 1, wherein the current focus area is an oval region.” However, Vallespi-Gonzalez teaches (¶0021) a system for determining objects of interest; (¶0136 and Fig. 5) generating ellipses/ovals on a display where the object of interest/zone of interest is located. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Brueckner, Iandola, Yasuda, Campbell, and Trofymov with an oval region as taught by Vallespi-Gonzalez for the benefit of better matching the object shape.
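For illustration only, an oval focus region can be represented as a boolean pixel mask; the parameterization below is an assumption, not the reference's implementation.

    import numpy as np

    def oval_focus_mask(h, w, center, axes):
        """Boolean H x W mask that is True inside an elliptical (oval) focus
        region. `center` is (row, col); `axes` is (vertical, horizontal) semi-axes."""
        cy, cx = center
        ay, ax = axes
        yy, xx = np.ogrid[:h, :w]
        return ((yy - cy) / ay) ** 2 + ((xx - cx) / ax) ** 2 <= 1.0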
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kuehnle et al. (US 20160086040) – (Fig. 3 and ¶0033) The truck cameras 22, 24 are mounted relatively close to the truck body, i.e., closer than a cyclist typically approaches a truck. For this reason, the wheels are seen by the cameras from the side as ellipses. Ellipse detection methods may use random sampling to increase calculation speed. Ellipse detection is combined with wheel tracking to focus on those areas (and sample more densely) where wheels are expected to be positioned once tracking is initiated.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK J JOHNSON whose telephone number is (571)272-9629. The examiner can normally be reached 9:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian T. Pendleton, can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Frank Johnson/Primary Examiner, Art Unit 2425