DETAILED ACTION
Notice of AIA Status
The present application is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Applicant’s remarks filed 08/06/2025 regarding the 35 U.S.C. 101 rejection set forth in the non-final Office action dated 12/17/2024 have been considered. Applicant’s amendments have overcome the 35 U.S.C. 101 rejection, which is hereby withdrawn.
Applicant states on page 13 that “dependent claims 5, 8, 10, 18, and 28 have been amended for consistency.” The Office respectfully brings to Applicant’s attention that the dependent claims that have been amended are claims 5, 8, 10, 15, 18, and 28.
Response to Arguments
Applicant’s arguments, see remarks filed 08/06/2025, with respect to claims 1-6, 8, 10-16, 18, 20-28, and 30 have been fully considered but are moot because the arguments do not apply to the combination of references being used in the current rejection.
Claim Objections
Claims 25 and 27 are objected to because of the following informalities:
In claim 25, line 7, the term “the detected object” should be changed to “the detected vehicle” to correct a typographical issue and avoid a lack of clarity that could lead to a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 25, line 12, the term “the detected object” should likewise be changed to “the detected vehicle” for the same reason.
Claim 27 does not further limit independent claim 21. The Office advises either amending or canceling the claim to avoid a rejection under 35 U.S.C. 112(d). Please see claims 7 and 17, which contained similar limitations and were canceled with this claim set.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claims 1, 11, 12, 14, and 15 recite limitations that use the word “means” (or “step”) or a generic placeholder coupled with functional language and therefore invoke 35 U.S.C. 112(f):
Claim 1 recites the limitation “obtaining, using at least one processing device …” [Line 3].
Claim 1 recites the limitation “using the at least one processing device …” [Line 6].
Claim 1 recites the limitation “identifying, using the at least one processing device …” [Line 9].
Claim 1 recites the limitation “determining using at least one processing device …” [Line 2].
Claim 11 recites the limitation “one processing device configured to …” [Line 2].
Claim 12 recites the limitation “one processing device is further configured to …” [Line 11].
Claim 14 recites the limitation “one processing device is configured to …” [Lines 22-23].
Claim 15 recites the limitation “one processing device is configured to …” [Line 27].
Claim 15 recites the limitation “one processing device is configured to …” [Line 30].
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
After a careful analysis, as discussed above, and a careful review of the specification, the “processing device” limitation in claims 1, 11, 12, 14, and 15 is supported by sufficient structure. The specification states (Fig. 12, #1202, Paragraph [0098]): “As shown in FIGURE 12, the device 1200 denotes a computing device or system that includes at least one processing device 1202, at least one storage device 1204, at least one communication unit 1206, and at least one input/output (I/O) unit 1208. The processing device 1202 may execute instructions that can be loaded into a memory 1210. The processing device 1202 includes any suitable number(s) and type(s) of processors or other processing devices in any suitable arrangement. Example types of processing devices 1202 include one or more microprocessors, microcontrollers, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or discrete circuitry.” Thus, the limitation has sufficient corresponding structure, namely hardware comprising any suitable number(s) and type(s) of processors or other processing devices, including one or more microprocessors, microcontrollers, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or discrete circuitry.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 11-16, and 21-27 are rejected under 35 U.S.C. 103 as being unpatentable over Fernandez et al. (US 20140133698 A1), hereafter referenced as Fernandez, in view of Takahashi et al. (US 20160014406 A1), hereafter referenced as Takahashi, Liu et al. (US 20190103026 A1), hereafter referenced as Liu, and Tan et al. (US 20230005173 A1), hereafter referenced as Tan.
Regarding claim 1, Fernandez teaches a method comprising (Fig. 1, Paragraph [0020]-Fernandez discloses the present disclosure provides a method of detecting an object of interest in an image):
obtaining, using the at least one processing device (Fig. 2-3, Paragraph [0030]- Fernandez discloses the present disclosure provides a computer for detecting objects of interest comprising: [0031] a. a processor; [0032] b. a camera; and [0033] c. a memory having stored there software instructions),
a refined boundary (Fig. 4 and 9, Abstract- Fernandez discloses the RLE and SAT are used to identify candidate objects and to iteratively refine their boundaries.)
identifying a specified portion of a detected vehicle within a scene (Fig. 4 and 9, Paragraph [0078]- Fernandez discloses this idealized signature can be extrapolated to fit imperfect images, where the boundaries of a relatively rectangular object may be constructed as a "box," starting with a long run of relatively intense pixels),
repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420.)
identifying, using the at least one processing device, one or more locations of one or more components of the detected vehicle based on the identified regions and the determined similarities (Fig. 1, paragraph [0047]- Fernandez discloses for example, simple Haar features are implemented to achieve corner detection, thereby reducing MIPS. Further, the symmetry, corner, and shadow detection features are performed on vehicle(s) in the previous image frame, instead of possible vehicle(s) in the current frame, thereby providing deterministic MIPS.);
identifying, using the at least one processing device, at least one action to be performed based on the one or more locations of the one or more components of the detected vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
and performing the identified at least one action (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.).
Fernandez fails to explicitly teach determining, using at least one processing device, a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
However, Takahashi explicitly teaches determining, using at least one processing device, a position of a vanishing point based on multiple collections of line segments in an image comprising image data (Fig. 22, Paragraph [0383]- Takahashi discloses the vanishing point information can be identified using a white line on a road face displayed on a captured image and vehicle operation information. Further, in Fig. 22, Paragraph [0388]- Takahashi discloses the y coordinate Vy of the vanishing point can be obtained from the intercept of the approximated straight line of the road face obtained by the previous processing.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez, which disclose a method comprising obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data, and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi of determining, using at least one processing device, a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
The combination results in Fernandez’s object detection system, particularly for vehicles, determining, using at least one processing device, a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
The motivation for the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi both disclose systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Takahashi’s system provides a further increase in that accuracy. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069], and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez fails to explicitly teach the refined boundary based on an original boundary refined based on the vanishing point.
However, newly cited prior art Liu (US 20190103026 A1) explicitly teaches the refined boundary based on an original boundary refined based on the vanishing point (Fig. 6C, Paragraph [0044]- Liu discloses the tracker 330 may scale the bounding box 630 based on a vanishing point 640 or projected lines intersecting the vanishing point 640 in the 2D image frame. The tracker 330 may predict how the bounding box 630 may change as the detected object moves closer toward a provider's vehicle 140, e.g., based on changes in a vertical direction of the image frames.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi, which disclose a method comprising obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data, and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Liu of the refined boundary being based on an original boundary refined based on the vanishing point.
The combination results in Fernandez’s object detection system, particularly for vehicles, having the refined boundary based on an original boundary refined based on the vanishing point.
The motivation for the modification would have been to allow more efficient tracking of vehicles, since Fernandez and Liu both disclose systems for detecting vehicles. Fernandez’s system improves the accuracy of the data obtained, while Liu’s system provides an increase in the efficiency of tracking detected vehicles. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069], and Liu et al. (US 20190103026 A1), Paragraphs [0029]-[0030].
Fernandez fails to explicitly teach and (ii) determining a similarity of the image data contained within the multiple regions.
However, Tan explicitly teaches and (ii) determining a similarity of the image data contained within the multiple regions (Fig. 1, Paragraph [0140]- Tan discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, which disclose a method comprising obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data, and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan of (ii) determining a similarity of the image data contained within the multiple regions.
The combination results in Fernandez’s object detection system, particularly for vehicles, (ii) determining a similarity of the image data contained within the multiple regions.
The motivation for the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan both disclose systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069], and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 2, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 1,
Fernandez explicitly teaches further comprising: obtaining an image of the scene (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle),
the image comprising the image data (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle);
generating an integral image of the scene (Fig. 1, Paragraph [0060-61]- Fernandez discloses in a separate branch of FIG. 1, in block 110, a summed area table (SAT) is produced for a prior image and The SAT is produced only once for each image, and greatly simplifies calculating sums for other operations.);
Fernandez fails to explicitly teach and during each iteration, determining a probabilistic distribution associated with each of the multiple regions based on the integral image.
However, Tan explicitly teaches during each iteration, determining a probabilistic distribution associated with each of the multiple regions based on the integral image (Fig. 1, Paragraph [0139]- Tan discloses the uncertainty based inconsistency computation computes a cross-modal inconsistency with probability distributions of each true positive pair from each object detection network. The uncertainty based inconsistency computations can be further divided into two sub-groups: bounding box and heatmap.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez, which disclose a method comprising obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data, and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan of, during each iteration, determining a probabilistic distribution associated with each of the multiple regions based on the integral image.
The combination results in Fernandez’s object detection system, particularly for vehicles, during each iteration, determining a probabilistic distribution associated with each of the multiple regions based on the integral image.
The motivation for the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan both disclose systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069], and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 3, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 2,
Fernandez explicitly teaches wherein the probabilistic distributions represent normal distribution parameterizations (Fig. 13, Paragraph [0075]- Fernandez discloses in a real-world image with degrees of intensity and imperfect edges, the vertical sum will have an imperfect but characteristic "peak" or local maximum, with a steep rise and fall around the peak, as in FIG. 13; this illustrates a normal distribution).
Regarding claim 4, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 2,
Fernandez explicitly teaches wherein, during each iteration, determining the similarity of the image data contained within the multiple regions comprises (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420. Intensity sums are taken for both the left side (A) and the right side (B) of segment 422. A difference of these sums is then computed (A-B). The sum is not processor intensive because it is based on SAT 240 (FIG. 2). Shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.)
Fernandez fails to explicitly teach, determining a divergence between the probabilistic distributions associated with the multiple regions.
However, Tan explicitly teaches determining a divergence between the probabilistic distributions associated with the multiple regions (Fig. 1, Paragraph [0140]- Tan Discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez, which disclose a method comprising obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data, and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan of determining a divergence between the probabilistic distributions associated with the multiple regions.
The combination results in Fernandez’s object detection system, particularly for vehicles, determining a divergence between the probabilistic distributions associated with the multiple regions.
The motivation for the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan both disclose systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069], and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 5, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 1,
Fernandez explicitly teaches wherein: each iteration generates coordinates based on the identified regions within the refined boundary (Fig. 4A, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420. Intensity sums are taken for both the left side (A) and the right side (B) of segment 422. A difference of these sums is then computed (A-B). The sum is not processor intensive because it is based on SAT 240 (FIG. 2). In a practical application, shape 420 is centered on object 410 when the difference is at a minimum, as shown in FIG. 4A.);
and identifying the one or more locations of the one or more components of the detected vehicle comprises generating a weighted combination of the coordinates while using the determined similarities (Fig. 2, Paragraph [0082]- Fernandez discloses core 220 then in block 260 performs corner, symmetry, and shadow detection operations on candidate objects in a prior iteration of the image using SAT 240. The operations of block 260 provide a set of candidate objects, which are provided to block 270, where HoG/SVM is performed on the candidate objects. Tracking is implemented in block 290 on all objects classified as vehicles for the current frame.),
the one or more locations of the one or more components of the detected vehicle representing the weighted combination (Fig. 4A, Paragraph [0064]- Fernandez discloses Shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.).
Regarding claim 6, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 1,
Fernandez explicitly teaches wherein, during each iteration, the multiple regions within the refined boundary include a first region (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the first region can be the left or right sides)) and
a mirrored second region within the refined boundary (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the mirrored second region can be the left or right sides whichever the first region is not)).
Regarding claim 11, Fernandez teaches an apparatus comprising (Fig. 2-3, Paragraph [0092]- Fernandez discloses further, the operations and steps described with reference to the preceding FIGURES illustrate only some of the possible scenarios that may be executed by, or within, the various apparatuses, processors, devices, and/or systems, described herein.):
at least one processing device (Fig. 2-3, Paragraph [0030]- Fernandez discloses the present disclosure provides a computer for detecting objects of interest comprising: [0031] a. a processor; [0032] b. a camera; and [0033] c. a memory having stored there software instructions):
configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle. Further in Fig. 4 and 9, Paragraph [0078]- Fernandez discloses this idealized signature can be extrapolated to fit imperfect images, where the boundaries of a relatively rectangular object may be constructed as a "box," starting with a long run of relatively intense pixels),
the refined boundary associated with image data (Fig. 4 and 9, Paragraph [0078]- Fernandez discloses this idealized signature can be extrapolated to fit imperfect images, where the boundaries of a relatively rectangular object may be constructed as a "box," starting with a long run of relatively intense pixels);
repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420.) and
identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities (Fig. 1, paragraph [0047]- Fernandez discloses for example, simple Haar features are implemented to achieve corner detection, thereby reducing MIPS. Further, the symmetry, corner, and shadow detection features are performed on vehicle(s) in the previous image frame, instead of possible vehicle(s) in the current frame, thereby providing deterministic MIPS.).
identify at least one action to be performed based on the one or more locations of the one or more components of the detected vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
and performing the identified at least one action (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.).
Fernandez fails to explicitly teach determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
However, Takahashi explicitly teaches determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data (see Takahashi, Fig. 22, Paragraphs [0383] and [0388], as cited with respect to claim 1 above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez, which disclose an apparatus comprising at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi of determining a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
Wherein having Fernandez’s system of object detection particularly for vehicles wherein determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
The motivation behind the modification would have been to allow for more accurate data to be obtained, since both Fernandez and Takahashi are both systems for detecting objects particularly vehicles. Wherein Fernandez’s system wherein improved the accuracy of the data obtained, while Takahashi’s system provides a further increase in accuracy of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraph [0009-10]).
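For illustration only, the claimed determination of a vanishing point from multiple collections of line segments can be sketched as a least-squares intersection of the lines extending those segments. This is the office’s illustrative sketch, not the algorithm of Takahashi or any other cited reference; the function names and the least-squares formulation are assumptions.

```python
def line_through(p1, p2):
    """Return homogeneous line coefficients (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def vanishing_point(segments):
    """Least-squares intersection of the lines extending the given segments.

    segments: list of ((x1, y1), (x2, y2)) pairs, e.g. detected lane markings.
    Minimizes sum_i (a_i*x + b_i*y + c_i)^2 via the 2x2 normal equations.
    """
    saa = sab = sbb = sac = sbc = 0.0
    for p1, p2 in segments:
        a, b, c = line_through(p1, p2)
        n = (a * a + b * b) ** 0.5  # normalize so every line counts equally
        a, b, c = a / n, b / n, c / n
        saa += a * a; sab += a * b; sbb += b * b
        sac += a * c; sbc += b * c
    det = saa * sbb - sab * sab
    if abs(det) < 1e-12:
        raise ValueError("segments are parallel; no unique vanishing point")
    # Cramer's rule on the normal equations [saa sab; sab sbb] [x; y] = [-sac; -sbc]
    x = (-sac * sbb + sbc * sab) / det
    y = (-saa * sbc + sab * sac) / det
    return x, y
```

Two converging lane-marking segments, for example, yield the image point where their extensions meet.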
Fernandez fails to explicitly teach the refined boundary based on an original boundary refined based on the vanishing point.
However, newly cited prior art Liu (US 20190103026 A1) explicitly teaches the refined boundary based on an original boundary refined based on the vanishing point (Fig. 6C, Paragraph [0044]- Liu discloses the tracker 330 may scale the bounding box 630 based on a vanishing point 640 or projected lines intersecting the vanishing point 640 in the 2D image frame. The tracker 330 may predict how the bounding box 630 may change as the detected object moves closer toward a provider's vehicle 140, e.g., based on changes in a vertical direction of the image frames.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi, of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Liu, of the refined boundary being based on an original boundary refined based on the vanishing point.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, so that the refined boundary is based on an original boundary refined based on the vanishing point.
The motivation behind the modification would have been to allow more efficient tracking of vehicles, since Fernandez and Liu are both systems for detecting vehicles. Fernandez’s system improves the accuracy of the data obtained, while Liu’s system provides an increase in the efficiency of tracking detected vehicles. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Liu et al. (US 20190103026 A1), Paragraphs [0029]-[0030].
Fernandez fails to explicitly teach (ii) determine a similarity of the image data contained within the multiple regions.
However, Tan explicitly teaches (ii) determine a similarity of the image data contained within the multiple regions (Fig. 1, Paragraph [0140]- Tan discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, of (ii) determining a similarity of the image data contained within the multiple regions.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, to (ii) determine a similarity of the image data contained within the multiple regions.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 12, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 11,
Fernandez explicitly teaches wherein the at least one processing device is further configured to: obtain an image of the scene, the image comprising the image data (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle);
generate an integral image of the scene (Fig. 1, Paragraph [0060-61] Fernandez discloses in a separate branch of FIG. 1, in block 110, a summed area table (SAT) is produced for a prior image and The SAT is produced only once for each image, and greatly simplifies calculating sums for other operations.);
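For illustration only, the summed area table (integral image) that Fernandez produces in block 110 can be sketched as follows: each entry holds the sum of all pixels above and to the left, so any rectangular region sum afterward costs four lookups. The function names are the office’s assumptions, not Fernandez’s code.

```python
def integral_image(img):
    """img: 2D list of pixel intensities. Returns a SAT of the same size,
    where sat[y][x] is the sum of img over rows 0..y and columns 0..x."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0)
    return sat

def region_sum(sat, top, left, bottom, right):
    """Sum of img[top..bottom][left..right] (inclusive bounds) in O(1)."""
    total = sat[bottom][right]
    if top > 0:
        total -= sat[top - 1][right]     # remove rows above the region
    if left > 0:
        total -= sat[bottom][left - 1]   # remove columns left of the region
    if top > 0 and left > 0:
        total += sat[top - 1][left - 1]  # re-add the doubly removed corner
    return total
```

This is why Fernandez notes that the SAT is produced only once per image yet greatly simplifies the later sum calculations.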
Fernandez fails to explicitly teach during each iteration, determine a probabilistic distribution associated with each of the multiple regions based on the integral image.
However, Tan explicitly teaches during each iteration, determine a probabilistic distribution associated with each of the multiple regions based on the integral image (Fig. 1, Paragraph [0139]- Tan discloses the uncertainty based inconsistency computation computes a cross-modal inconsistency with probability distributions of each true positive pair from each object detection network. The uncertainty based inconsistency computations can be further divided into two sub-groups: bounding box and heatmap.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, of determining, during each iteration, a probabilistic distribution associated with each of the multiple regions based on the integral image.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, to determine, during each iteration, a probabilistic distribution associated with each of the multiple regions based on the integral image.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 13, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 12,
Fernandez explicitly teaches wherein the probabilistic distributions represent normal distribution parameterizations (Fig. 13, Paragraph [0075]- Fernandez discloses in a real-world image with degrees of intensity and imperfect edges, the vertical sum will have an imperfect but characteristic "peak" or local maximum, with a steep rise and fall around the peak, as in FIG. 13; this shows a normal distribution).
Regarding claim 14, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 12,
Fernandez explicitly teaches wherein, to determine the similarity of the image data contained within the multiple regions during each iteration (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420. Intensity sums are taken for both the left side (A) and the right side (B) of segment 422. A difference of these sums is then computed (A-B). The sum is not processor intensive because it is based on SAT 240 (FIG. 2). Shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.).
Fernandez fails to explicitly teach the at least one processing device is configured to determine a divergence between the probabilistic distributions associated with the multiple regions.
However, Tan explicitly teaches the at least one processing device is configured to determine a divergence between the probabilistic distributions associated with the multiple regions (Fig. 1, Paragraph [0140]- Tan discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, wherein the at least one processing device is configured to determine a divergence between the probabilistic distributions associated with the multiple regions.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, wherein the at least one processing device is configured to determine a divergence between the probabilistic distributions associated with the multiple regions.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
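For illustration only, the Jensen-Shannon divergence that Tan applies to two probability distributions (such as distributions associated with bounding boxes) can be sketched in its discrete-histogram form. This is an illustrative sketch, not Tan’s implementation; the function and variable names are assumptions.

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded, 0 when p == q.
    Defined as the average KL divergence of p and q from their mixture m."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

A value near zero indicates the two regions’ distributions are highly similar, which is how a divergence doubles as a similarity measure.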
Regarding claim 15, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 11,
Fernandez explicitly teaches wherein: during each iteration, the at least one processing device is configured to generate coordinates based on the identified regions within the refined boundary (Fig. 4A, Paragraph [0064]- Fernandez discloses in a practical application, shape 420 is centered on object 410 when the difference is at a minimum, as shown in FIG. 4A.);
and to identify the one or more locations of the one or more components of the detected vehicle, the at least one processing device is configured to generate a weighted combination of the coordinates while using the determined similarities as weights for the coordinates (Fig. 2, Paragraph [0082]- Fernandez discloses core 220 then in block 260 performs corner, symmetry, and shadow detection operations on candidate objects in a prior iteration of the image using SAT 240. The operations of block 260 provide a set of candidate objects, which are provided to block 270, where HoG/SVM is performed on the candidate objects. Tracking is implemented in block 290 on all objects classified as vehicles for the current frame.),
the one or more locations of the one or more components of the detected vehicle representing the weighted combination (Fig. 4A, Paragraph [0064]- Fernandez discloses shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.).
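For illustration only, the claimed weighted combination of coordinates, using the determined similarities as weights, can be sketched as a weighted average. The function name and the (x, y) coordinate form are the office’s assumptions, not Fernandez’s code.

```python
def weighted_location(coords, similarities):
    """Estimate a component location as the similarity-weighted average of
    per-region coordinates.

    coords: list of (x, y) tuples, one per identified region.
    similarities: matching non-negative weights (the determined similarities).
    """
    total = sum(similarities)
    if total == 0:
        raise ValueError("all similarity weights are zero")
    x = sum(w * cx for (cx, _), w in zip(coords, similarities)) / total
    y = sum(w * cy for (_, cy), w in zip(coords, similarities)) / total
    return x, y
```

Regions with higher similarity thus pull the estimated location toward their coordinates.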
Regarding claim 16, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 11,
Fernandez explicitly teaches wherein, during each iteration, the multiple regions within the refined boundary include a first region (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the first region can be the left or right sides)) and
a mirrored second region within the refined boundary (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the mirrored second region can be the left or right sides whichever the first region is not)).
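For illustration only, the left/right symmetry check Fernandez describes (intensity sums A and B on either side of a candidate vertical axis, with the axis slid until the difference A - B is minimal) can be sketched on a single intensity row. The 1-D simplification and the names are the office’s assumptions, not Fernandez’s implementation.

```python
def symmetry_axis(row, half_width):
    """row: 1D list of pixel intensities. Returns (column, difference) for
    the candidate axis whose left region (A) and mirrored right region (B),
    each half_width pixels wide, have the most similar intensity sums."""
    best_col, best_diff = None, None
    for c in range(half_width, len(row) - half_width):
        a = sum(row[c - half_width:c])          # first region (left side)
        b = sum(row[c + 1:c + half_width + 1])  # mirrored second region
        diff = abs(a - b)
        if best_diff is None or diff < best_diff:
            best_col, best_diff = c, diff
    return best_col, best_diff
```

For a perfectly symmetrical vehicle rear the minimal difference is zero, matching Fernandez’s ideal-image case.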
Regarding claim 21, Fernandez teaches a non-transitory machine-readable medium containing instructions that when executed cause (Fig. 2-3, Paragraph [0088]-Fernandez discloses moreover, it should be noted that the use of complementary electronic devices, hardware, non-transitory software, etc. offer an equally viable option for implementing the teachings of the present disclosure.):
at least one processor to (Fig. 2-3, Paragraph [0030]- Fernandez discloses the present disclosure provides a computer for detecting objects of interest comprising: [0031] a. a processor; [0032] b. a camera; and [0033] c. a memory having stored there software instructions):
obtain a refined boundary identifying a specified portion of a detected object within a scene (Fig. 4 and 9, Paragraph [0078]- Fernandez discloses this idealized signature can be extrapolated to fit imperfect images, where the boundaries of a relatively rectangular object may be constructed as a "box," starting with a long run of relatively intense pixels),
the refined boundary associated with image data (Fig. 4 and 9, Paragraph [0078]- Fernandez discloses this idealized signature can be extrapolated to fit imperfect images, where the boundaries of a relatively rectangular object may be constructed as a "box," starting with a long run of relatively intense pixels);
repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420.) and
identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities (Fig. 1, Paragraph [0047]- Fernandez discloses for example, simple Haar features are implemented to achieve corner detection, thereby reducing MIPS. Further, the symmetry, corner, and shadow detection features are performed on vehicle(s) in the previous image frame, instead of possible vehicle(s) in the current frame, thereby providing deterministic MIPS.).
identify at least one action to be performed based on the one or more locations of the one or more components of the detected vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
and perform the identified at least one action (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.).
Fernandez fails to explicitly teach determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
However, Takahashi explicitly teaches determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez, of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi, of determining a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, to determine a position of a vanishing point based on multiple collections of line segments in an image comprising image data.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Takahashi’s system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez fails to explicitly teach the refined boundary based on an original boundary refined based on the vanishing point.
However, newly cited prior art Liu (US 20190103026 A1) explicitly teaches the refined boundary based on an original boundary refined based on the vanishing point (Fig. 6C, Paragraph [0044]- Liu discloses the tracker 330 may scale the bounding box 630 based on a vanishing point 640 or projected lines intersecting the vanishing point 640 in the 2D image frame. The tracker 330 may predict how the bounding box 630 may change as the detected object moves closer toward a provider's vehicle 140, e.g., based on changes in a vertical direction of the image frames.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi, of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Liu, of the refined boundary being based on an original boundary refined based on the vanishing point.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, so that the refined boundary is based on an original boundary refined based on the vanishing point.
The motivation behind the modification would have been to allow more efficient tracking of vehicles, since Fernandez and Liu are both systems for detecting vehicles. Fernandez’s system improves the accuracy of the data obtained, while Liu’s system provides an increase in the efficiency of tracking detected vehicles. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Liu et al. (US 20190103026 A1), Paragraphs [0029]-[0030].
Fernandez fails to explicitly teach (ii) determine a similarity of the image data contained within the multiple regions.
However, Tan explicitly teaches (ii) determine a similarity of the image data contained within the multiple regions (Fig. 1, Paragraph [0140]- Tan discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, of (ii) determining a similarity of the image data contained within the multiple regions.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, to (ii) determine a similarity of the image data contained within the multiple regions.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 22, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches further containing instructions that when executed cause the at least one processor to obtain an image of the scene (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle),
the image comprising the image data (Fig. 6, Paragraph [0053]- Fernandez discloses FIG. 6 discloses an image 600 of an exemplary road scene, which may be taken, for example, from a video stream of an onboard-camera of an ADAS-equipped vehicle);
generate an integral image of the scene (Fig. 1, Paragraph [0060-61] Fernandez discloses in a separate branch of FIG. 1, in block 110, a summed area table (SAT) is produced for a prior image and The SAT is produced only once for each image, and greatly simplifies calculating sums for other operations.);
Fernandez fails to explicitly teach and during each iteration, determine a probabilistic distribution associated with each of the multiple regions based on the integral image.
However, Tan explicitly teaches during each iteration, determine a probabilistic distribution associated with each of the multiple regions based on the integral image (Fig. 1, Paragraph [0140]- Tan discloses the uncertainty based inconsistency computation computes a cross-modal inconsistency with probability distributions of each true positive pair from each object detection network. The uncertainty based inconsistency computations can be further divided into two sub-groups: bounding box and heatmap.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, of determining, during each iteration, a probabilistic distribution associated with each of the multiple regions based on the integral image.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, to determine, during each iteration, a probabilistic distribution associated with each of the multiple regions based on the integral image.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 23, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 22,
Fernandez explicitly teaches wherein the probabilistic distributions represent normal distribution parameterizations (Fig. 13, Paragraph [0075]- Fernandez discloses in a real-world image with degrees of intensity and imperfect edges, the vertical sum will have an imperfect but characteristic "peak" or local maximum, with a steep rise and fall around the peak, as in FIG. 13; this shows a normal distribution).
Regarding claim 24, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 22,
Fernandez explicitly teaches, wherein the instructions that when executed cause the at least one processor to determine the similarity of the image data contained within the multiple regions during each iteration (Fig. 4, Paragraph [0064]- Fernandez discloses after object 420 is divided into segments 422, a vertical axis 450 is established through the center of shape 420. Intensity sums are taken for both the left side (A) and the right side (B) of segment 422. A difference of these sums is then computed (A-B). The sum is not processor intensive because it is based on SAT 240 (FIG. 2). Shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.).
Fernandez fails to explicitly teach instructions that when executed cause the at least one processor to determine a divergence between the probabilistic distributions associated with the multiple regions.
However, Tan explicitly teaches instructions that when executed cause the at least one processor to determine a divergence between the probabilistic distributions associated with the multiple regions (Fig. 1, Paragraph [0140]- Tan discloses the Jensen-Shannon (JS) divergence measures the similarity between two probability distributions, such as those associated with bounding boxes.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi and Liu, of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Tan, of instructions that when executed cause the at least one processor to determine a divergence between the probabilistic distributions associated with the multiple regions.
The combination modifies Fernandez’s system of object detection, particularly for vehicles, with instructions that when executed cause the at least one processor to determine a divergence between the probabilistic distributions associated with the multiple regions.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Tan are both systems for detecting objects, particularly vehicles. Fernandez’s system improves the accuracy of the data obtained, while Tan’s system provides an increase in the efficiency of obtaining data. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Tan et al. (US 20230005173 A1), Paragraph [0034].
Regarding claim 25, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches wherein: the instructions when executed cause the at least one processor to generate, during each iteration, coordinates based on the identified regions within the refined boundary (Fig. 4A, Paragraph [0064]- Fernandez discloses in a practical application, shape 420 is centered on object 410 when the difference is at a minimum, as shown in FIG. 4A.); and
the instructions that when executed cause the at least one processor to identify the one or more locations of the one or more components of the detected object comprise instructions that when executed cause the at least one processor to generate a weighted combination of the coordinates while using the determined similarities as weights for the coordinates (Fig. 2, Paragraph [0082]- Fernandez discloses core 220 then in block 260 performs corner, symmetry, and shadow detection operations on candidate objects in a prior iteration of the image using SAT 240. The operations of block 260 provide a set of candidate objects, which are provided to block 270, where HoG/SVM is performed on the candidate objects. Tracking is implemented in block 290 on all objects classified as vehicles for the current frame.),
the one or more locations of the one or more components of the detected object representing the weighted combination (Fig. 4A, Paragraph [0064]- Fernandez discloses shape 420 is then slid left and right, and contracted and expanded horizontally. With each change, additional differences of sums are taken. In an ideal image, the two sides of object 410 are exactly symmetrical, and the difference of the sums will be exactly zero when shape 420 is centered on object 410 and equal in width.).
Regarding claim 26, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches wherein, during each iteration, the multiple regions within the refined boundary include a first region (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the first region can be the left or right sides)) and
a mirrored second region within the refined boundary (Fig. 1, Paragraph [0062]- Fernandez discloses in block 140, an image is provided, including vehicles or other candidate objects identified in a previous frame. In block 144, a symmetry check is performed to refine detection of left and right sides. This is useful for detecting vehicles because almost all motor vehicles are essentially symmetrical along a vertical axis when viewed from behind. (Wherein the mirrored second region can be the left or right sides whichever the first region is not)).
Regarding claim 27, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches identify the refined boundary based on an original boundary (Fig. 1, Abstract- Fernandez discloses the RLE and SAT are used to identify candidate objects and to iteratively refine their boundaries),
Fernandez in view of Tan fails to explicitly teach further containing instructions that when executed cause the at least one processor to: determine a position of a vanishing point based on multiple collections of line segments in an image comprising the image data and the original boundary refined based on the vanishing point.
However, Takahashi explicitly teaches further containing instructions that when executed cause the at least one processor to determine a position of a vanishing point based on multiple collections of line segments in an image comprising the image data (Fig. 22, Paragraph [0383]- Takahashi discloses the vanishing point information can be identified using a white line on a road face displayed on a captured image and vehicle operation information. Further, in Fig. 22, Paragraph [0388], Takahashi discloses that the y coordinate Vy of the vanishing point can be obtained from the intercept of the approximated straight line of the road face obtained by the previous processing.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi of further containing instructions that when executed cause the at least one processor to: determine a position of a vanishing point based on multiple collections of line segments in an image comprising the image data, the original boundary being refined based on the vanishing point.
In the combination, Fernandez's system of object detection, particularly for vehicles, would further contain instructions that when executed cause the at least one processor to determine a position of a vanishing point based on multiple collections of line segments in an image comprising the image data, the original boundary being refined based on the vanishing point.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi are both systems for detecting objects, particularly vehicles. Fernandez's system improved the accuracy of the data obtained, while Takahashi's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez in view of Tan fails to explicitly teach identify the refined boundary based on an original boundary, the original boundary refined based on the vanishing point.
However, Liu explicitly teaches identify the refined boundary based on an original boundary, the original boundary refined based on the vanishing point (Fig. 6C, Paragraph [0044]- Liu discloses the tracker 330 may scale the bounding box 630 based on a vanishing point 640 or projected lines intersecting the vanishing point 640 in the 2D image frame. The tracker 330 may predict how the bounding box 630 may change as the detected object moves closer toward a provider's vehicle 140, e.g., based on changes in a vertical direction of the image frames.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Takahashi of having a method comprising: obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Liu of identifying the refined boundary based on an original boundary, the original boundary refined based on the vanishing point.
In the combination, Fernandez's system of object detection, particularly for vehicles, would identify the refined boundary based on an original boundary, the original boundary refined based on the vanishing point.
The motivation behind the modification would have been to allow for more efficient tracking of vehicles, since Fernandez and Liu are both systems for detecting vehicles. Fernandez's system improved the accuracy of the data obtained, while Liu's system provides an increase in efficiency in tracking detected vehicles. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Liu et al. (US 20190103026 A1), Paragraphs [0029]-[0030].
Claims 8, 18, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Fernandez et al. (US 20140133698 A1), hereafter referenced as Fernandez, in view of Takahashi et al. (US 20160014406 A1), hereafter referenced as Takahashi, Liu et al. (US 20190103026 A1), hereafter referenced as Liu, and Tan et al. (US 20230005173 A1), hereafter referenced as Tan, and further in view of Wilbert et al. (US 20180300578 A1), hereafter referenced as Wilbert.
Regarding claim 8, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 1,
Fernandez explicitly teaches, the specified portion of the detected vehicle comprises a rear portion of the vehicle (Fig. 6, paragraph [0002]- Fernandez discloses the present disclosure relates generally to object detection and more particularly to detection of motor vehicles in a video stream for forward collision warning (FCW) systems. It can be seen in the figure that the images being scanned are from the rear.);
Fernandez in view of Tan fails to explicitly teach the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
However, Wilbert explicitly teaches the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle (Fig. 29, paragraph [0188]- Wilbert discloses the system then initiates a plurality of processes, including plate detector 2902, car detector 2911, logo detector 2914, and/or tail light and characteristic detector 2917.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having a method comprising: obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Wilbert wherein the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would have the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Wilbert are both systems that detect parts of a car. Fernandez's system improved the accuracy of the data obtained, while Wilbert's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Wilbert et al. (US 20180300578 A1), Paragraphs [0101] and [0182].
Regarding claim 18, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 11,
Fernandez explicitly teaches, the specified portion of the detected vehicle comprises a rear portion of the vehicle (Fig. 6, paragraph [0002]- Fernandez discloses the present disclosure relates generally to object detection and more particularly to detection of motor vehicles in a video stream for forward collision warning (FCW) systems. It can be seen in the figure that the images being scanned are from the rear.);
Fernandez in view of Tan fails to explicitly teach the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
However, Wilbert explicitly teaches the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle (Fig. 29, paragraph [0188]- Wilbert discloses the system then initiates a plurality of processes, including plate detector 2902, car detector 2911, logo detector 2914, and/or tail light and characteristic detector 2917.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Wilbert wherein the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would have the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Wilbert are both systems that detect parts of a car. Fernandez's system improved the accuracy of the data obtained, while Wilbert's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Wilbert et al. (US 20180300578 A1), Paragraphs [0101] and [0182].
Regarding claim 28, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches, the specified portion of the detected vehicle comprises a rear portion of the vehicle (Fig. 6, paragraph [0002]- Fernandez discloses the present disclosure relates generally to object detection and more particularly to detection of motor vehicles in a video stream for forward collision warning (FCW) systems. It can be seen in the figure that the images being scanned are from the rear.);
Fernandez fails to explicitly teach the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
However, Wilbert explicitly teaches the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle (Fig. 29, paragraph [0188]- Wilbert discloses the system then initiates a plurality of processes, including plate detector 2902, car detector 2911, logo detector 2914, and/or tail light and characteristic detector 2917.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Wilbert wherein the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would have the one or more components of the vehicle comprise at least one of: one or more taillights of the vehicle or a license plate of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Wilbert are both systems that detect parts of a car. Fernandez's system improved the accuracy of the data obtained, while Wilbert's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Wilbert et al. (US 20180300578 A1), Paragraphs [0101] and [0182].
Claims 10, 20, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Fernandez et al. (US 20140133698 A1), hereafter referenced as Fernandez, in view of Takahashi et al. (US 20160014406 A1), hereafter referenced as Takahashi, Liu et al. (US 20190103026 A1), hereafter referenced as Liu, and Tan et al. (US 20230005173 A1), hereafter referenced as Tan, and further in view of Lubbe et al. (Brake reactions of distracted drivers to pedestrian Forward Collision Warning systems), hereafter referenced as Lubbe.
Regarding claim 10, Fernandez in view of Takahashi, Liu, and Tan teaches the method of Claim 1,
Fernandez explicitly teaches, wherein the identified at least one action comprises at least one of: an adjustment to at least one of (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.):
a braking of the vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
Fernandez and Tan fail to explicitly teach a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
However, Takahashi explicitly teaches a steering of a vehicle (Fig. 1, Paragraph [0112]- Takahashi discloses the vehicle drive control unit 104 performs the cruise assist control such as reporting a warning to a driver of the vehicle 100, and controlling the steering and brakes of the vehicle),
a speed of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.),
an acceleration of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having a method comprising: obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary and identifying, using the at least one processing device, one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi of a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would include adjustments to a steering of the vehicle, a speed of the vehicle, and an acceleration of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi are both systems for detecting objects, particularly vehicles. Fernandez's system improved the accuracy of the data obtained, while Takahashi's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez in view of Tan and further in view of Takahashi fails to explicitly teach an activation of an audible, visible, or haptic warning.
However, Lubbe explicitly teaches an activation of an audible (Fig. 1, Section 3.1, Paragraph [0001]- Lubbe discloses that of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
visible (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
or haptic warning (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan and Takahashi of having a method comprising: obtaining, using at least one processing device, a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple regions within the refined boundary, with the teachings of Lubbe of an activation of an audible, visible, or haptic warning.
In the combination, Fernandez's system of object detection, particularly for vehicles, would include an activation of an audible, visible, or haptic warning.
The motivation behind the modification would have been to improve driver reaction based on the data obtained, since Fernandez and Lubbe both relate to forward collision warning systems. Fernandez's system improved the accuracy of the data obtained, while Lubbe's system provides an increase in driver reaction based on the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Lubbe et al. (Brake reactions of distracted drivers to pedestrian Forward Collision Warning systems), Section 1.1, Paragraphs [0002]-[0003] and Section 5.
Regarding claim 20, Fernandez in view of Takahashi, Liu, and Tan teaches the apparatus of Claim 11,
Fernandez explicitly teaches, wherein the identified at least one action comprises at least one of: an adjustment to at least one of (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.):
a braking of the vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
Fernandez and Tan fail to explicitly teach a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
However, Takahashi explicitly teaches a steering of a vehicle (Fig. 1, Paragraph [0112]- Takahashi discloses the vehicle drive control unit 104 performs the cruise assist control such as reporting a warning to a driver of the vehicle 100, and controlling the steering and brakes of the vehicle),
a speed of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.),
an acceleration of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary and identify one or more locations of one or more components of the detected object based on the identified regions and the determined similarities, with the teachings of Takahashi of a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would include adjustments to a steering of the vehicle, a speed of the vehicle, and an acceleration of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi are both systems for detecting objects, particularly vehicles. Fernandez's system improved the accuracy of the data obtained, while Takahashi's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez in view of Tan and further in view of Takahashi fails to explicitly teach an activation of an audible, visible, or haptic warning.
However, Lubbe explicitly teaches an activation of an audible (Fig. 1, Section 3.1, Paragraph [0001]- Lubbe discloses that of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
visible (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
or haptic warning (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having an apparatus comprising: at least one processing device configured to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary, with the teachings of Lubbe of an activation of an audible, visible, or haptic warning.
In the combination, Fernandez's system of object detection, particularly for vehicles, would include an activation of an audible, visible, or haptic warning.
The motivation behind the modification would have been to improve driver reaction based on the data obtained, since Fernandez and Lubbe both relate to forward collision warning systems. Fernandez's system improved the accuracy of the data obtained, while Lubbe's system provides an increase in driver reaction based on the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Lubbe et al. (Brake reactions of distracted drivers to pedestrian Forward Collision Warning systems), Section 1.1, Paragraphs [0002]-[0003] and Section 5.
Regarding claim 30, Fernandez in view of Takahashi, Liu, and Tan teaches the non-transitory machine-readable medium of Claim 21,
Fernandez explicitly teaches, wherein the identified at least one action comprises at least one of: an adjustment to at least one of (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.):
a braking of the vehicle (Fig. 6, Paragraph [0053]- Fernandez discloses in response to the detection, the FCW system may take an appropriate action such as warning the driver or applying brakes.);
Fernandez and Tan fail to explicitly teach a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
However, Takahashi explicitly teaches a steering of a vehicle (Fig. 1, Paragraph [0112]- Takahashi discloses the vehicle drive control unit 104 performs the cruise assist control such as reporting a warning to a driver of the vehicle 100, and controlling the steering and brakes of the vehicle),
a speed of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.),
an acceleration of the vehicle (Fig. 1, Paragraph [0121]- Takahashi discloses when performing these processing, vehicle operation information such as vehicle speed, acceleration (acceleration in front-to-rear direction of vehicle), steering angle, and yaw rate of the vehicle 100 can be input using the data IF 124, and such information can be used as parameters for various processing. Data output to the external unit can be used as input data used for controlling various devices of the vehicle 100 such as brake control, vehicle speed control, and warning control.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fernandez in view of Tan of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; and repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary, with the teachings of Takahashi of a steering of a vehicle, a speed of the vehicle, and an acceleration of the vehicle.
In the combination, Fernandez's system of object detection, particularly for vehicles, would include adjustments to a steering of the vehicle, a speed of the vehicle, and an acceleration of the vehicle.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Takahashi are both systems for detecting objects, particularly vehicles. Fernandez's system improved the accuracy of the data obtained, while Takahashi's system provides a further increase in the accuracy of the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Takahashi et al. (US 20160014406 A1), Paragraphs [0009]-[0010].
Fernandez in view of Tan and further in view of Takahashi fails to explicitly teach an activation of an audible, visible, or haptic warning.
However, Lubbe explicitly teaches an activation of an audible (Fig. 1, Section 3.1, Paragraph [0001]- Lubbe discloses that of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
visible (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.),
or haptic warning (Fig. 1, Section 3.1 Paragraph [0001]- Lubbe discloses of particular interest to this study were three HMIs: an audio-visual warning, a combination of brake pulse and audio-visual warning, and a novel HUD design highlighting the threat in combination with an audio-visual warning.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Fernandez in view of Tan of having a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: obtain a refined boundary identifying a specified portion of a detected object within a scene, the refined boundary associated with image data; repeatedly, during multiple iterations, (i) identify multiple regions within the refined boundary with the teachings of Lubbe an activation of an audible, visible, or haptic warning.
The combination results in Fernandez's system of object detection, particularly for vehicles, further including an activation of an audible, visible, or haptic warning.
The motivation behind the modification would have been to allow more accurate data to be obtained, since Fernandez and Lubbe are both systems that use detection of parts of a car. Fernandez's system improves the accuracy of the data obtained, while Lubbe's system provides an increase in driver reaction based on the data obtained. Please see Fernandez et al. (US 20140133698 A1), Paragraph [0069] and Lubbe et al. (Brake reactions of distracted drivers to pedestrian Forward Collision Warning systems), Section 1.1, Paragraphs [0002]-[0003] and Section 5.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lu et al. (US 20180350085 A1) - Embodiments described herein relate generally to determining correspondence between a template and an object in an image. A method may include: receiving an image of an environment including an image of an object within the image of the environment; resizing the first template to obtain a scaled first template having a size corresponding to a size of the image of the object; calculating a number of correspondences between the scaled first template and the image of the object; receiving a candidate homography; testing the candidate homography; and replacing the image of the object with a second template of a different object according to the candidate homography in response to the candidate homography being established as corresponding to the image of the object. Please see Fig. 1, Abstract.
Lee et al. (US 20220383529 A1) - A method with vanishing point estimation includes: obtaining an image of a current time point of objects comprising a target vehicle; detecting the objects in the image of the current time point; tracking positions of the objects in a world coordinate system by associating the objects with current position coordinates of the objects determined from images of previous time points that precede the current time point; determining a vanishing point for each of the objects based on the positions of the objects; and outputting the vanishing point determined for each of the objects. Please see Fig. 1, Abstract.
Matsuo et al. (US 20210042945 A1) - An objective of the present invention is, in a stereo camera device, to determine an accurate image position in a direction of progress to detect at an early stage an obstacle or a preceding vehicle on a road. Provided is a stereo camera device for measuring the distance to a solid object from images photographed with a plurality of cameras, said device characterized by: a wide-angle image cropping part for cropping a portion of the images; a distance image cropping part for cropping and enlarging a portion of the images; a road shape determination part for determining a road shape, including slope information, of a road being traveled; and determining, on the basis of the road shape in a prescribed distance, which has been derived with the road shape determination part, the cropping position and/or range of the distance image cropping part. Please see Fig. 1, Abstract.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUCIUS C.G. ALLEN whose telephone number is (703)756-5987. The examiner can normally be reached Mon - Fri 8-5pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571)272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUCIUS CAMERON GREEN ALLEN/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673