DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Status
This Final Action is in response to applicant’s amendment of 11/24/2025. Claims 1-4, 6-12, and 14-22 are pending and examined. Claims 1, 3-4, 6-8, 10, 12, 14-16, and 20 are currently amended; claims 5 and 13 are cancelled; and claims 21-22 are new.
Response to Arguments
Applicant’s amendments with respect to the claim objection set forth in the Office Action have been fully considered and are persuasive. As such, the objection has been withdrawn.
Applicant’s amendments/arguments with respect to the claim rejection under double patenting set forth in the Office Action have been fully considered. The Examiner notes that applicant has filed a terminal disclaimer. As such, the rejection of the claims under double patenting is withdrawn.
Applicant’s amendments/arguments with respect to the rejection under 35 USC 112(b) as set forth in the Office Action have been fully considered and are persuasive. As such, the rejection as previously presented has been withdrawn. However, applicant’s amendment raises a new rejection under 35 USC 112(b), addressed below.
Applicant’s arguments with respect to the rejection under 35 U.S.C. § 103 have been fully considered but are moot because the new ground of rejection does not rely on any reference(s) applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s amendments/arguments with respect to the rejection under 35 USC 101 as being directed to an abstract idea without significantly more have been carefully considered and are not persuasive.
Applicant specifically argues the following:
Without conceding the propriety of the rejection, and solely for prosecution efficiency, each of claims 1, 10, and 15 has been amended herein to recite inter alia: wherein the common object comprises a vehicle, the output comprises a vehicle type classification that is a majority vote based on a detected vehicle type classification in the first image and the second image. Support for the amendments is provided in at least para. [0051] of the specification as-filed.
Applicant respectfully notes that at least the above-recited features of claims 1, 10, and 15, as amended herein, integrate any alleged judicial exception into a practical application and are not well-understood, routine, conventional activities previously known to the industry and amount to significantly more than the judicial exception alleged in the Office action. Therefore, reconsideration and withdrawal of the rejection are respectfully requested.
The examiner has considered the arguments regarding Step 2A, Prong 2, and respectfully disagrees. The generating/outputting step is recited at a high level of generality (i.e., as a general action or change being taken based on the results of the determining/planning step(s)) and amounts to a mere post-solution action, which is a form of insignificant extra-solution activity. Thus, the claims as presented are directed to an abstract idea without significantly more. As such, the rejection of the claims under 35 U.S.C. § 101 is maintained herein. See the detailed analysis of the rejection under 35 U.S.C. § 101 below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 6-12, and 14-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
In claims 1, 10, and 15, the recited limitation “a common object” is indefinite. It is unclear what “common object” means: does it refer to a same object detected in both the first image and the second image, or to a well-known object?
In claim 3, line 3, the recited limitation “an image” is indefinite. It is unclear whether this refers to the first image, the second image, or a different image.
In claim 20, the recited limitation “similar direction” is indefinite. This limitation is a relative term, and the boundaries of a “similar direction” are not defined. Is it a same direction, or a direction within boundaries of the same direction?
In claim 14, the recited limitation “wherein the common object is a vehicle, a pedestrian, a structure adjacent to roadways” is indefinite. It is unclear whether the common object is at least one of these or all of these. The examiner interprets this limitation such that the common object is a vehicle, a pedestrian, or a structure adjacent to roadways.
Claims 2, 4, 6-9, 11-12, and 16-19 are rejected for being dependent upon a rejected claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6-12, 14-22 are rejected under 35 U.S.C. 101 because the claimed invention is not directed to patent eligible subject matter.
101 Analysis
Based upon consideration of all of the relevant factors with respect to the claim as a whole, the claim is determined to be directed to an abstract idea. The rationale for this determination is explained below:
When considering subject matter eligibility under 35 U.S.C. § 101 under the 2019 Revised Patent Subject Matter Eligibility Guidance, the Office is charged with determining whether the scope of the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1).
If the claim falls within one of the statutory categories (Step 1), the Office must then apply the two-prong inquiry of Step 2A: whether the claim recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and, if so, whether the judicial exception is integrated into a practical application.
Claims 1-4, 6-12, and 14-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1: Statutory Category
Independent claims 1 and 10 are directed to a process and a machine, respectively, which are statutory categories of invention (Step 1: Yes).
101 Analysis – Step 2A Prong 1: Judicial Exception Recited
The claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). The abstract idea falls under the “Mental Processes” grouping. The independent claims recite: identifying, based on metadata in the images, a first bounding box in a first image and a second bounding box in a second image, wherein the first bounding box and the second bounding box are associated with metadata providing information corresponding to a common object. These limitations, as drafted, constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. That is, other than the recitation of “a detection algorithm stored in a processor,” nothing in the claims precludes the steps from practically being performed in the mind: a person looking at different types of data, such as image data, image metadata, and vehicle type classification data, could identify a first bounding box in a first image and a second bounding box in a second image corresponding to a common object. The mere nominal recitation of “a detection algorithm stored in a processor” does not take the claim limitations out of the mental process grouping and merely functions to automate the identifying steps. Thus, the claims recite a mental process. (Step 2A – Prong 1: Judicial exception recited: Yes).
101 Analysis – Step 2A Prong 2: Practical Application
The independent claims recite the following additional limitations/elements: receiving images from one or more sensors installed in the vehicle; generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image; wherein the same object comprises a vehicle, the output comprises a vehicle type classification that is a majority vote based on a detected vehicle type classification in the first image and the second image; a plurality of sensors; a processor and a memory with instructions thereon; applying, on the images, a detection algorithm stored in the processor; and a computer-readable storage medium having code stored thereon. The receiving step is recited at a high level of generality (i.e., receiving/collecting various data such as images, metadata, etc.) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The generating and/or outputting step, which includes the output comprising a vehicle type classification that is a majority vote, is recited at a high level of generality (i.e., as a general action or change being taken based on the results of the identifying step) and amounts to a mere post-solution action, which is a form of insignificant extra-solution activity. The plurality of sensors is recited at a high level of generality (claimed generically) and operates in its ordinary capacity, such that it does not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception; the claims are no more than a drafting effort designed to monopolize the exception. The additional limitations of a processor and a memory with instructions thereon, applying, on the images, a detection algorithm stored in the processor, and a computer-readable storage medium having code stored thereon are recited at a high level of generality and merely function to automate the identifying and generating steps.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim(s) is/are directed to the abstract idea (Step 2A—Prong 2: Practical Application?: No).
101 Analysis – Step 2B: Inventive Concept
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than insignificant extra-solution activity.
Under the 2019 PEG, a conclusion that an additional element/limitation is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the receiving, generating, and applying steps/additional elements were considered to be extra-solution activities in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field. The specification does not provide any indication that these steps are performed by anything other than conventional components performing the conventional activity (steps) of the claim. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Further, the Federal Circuit in Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), for example, indicated that the mere displaying of data is a well-understood, routine, and conventional function. Accordingly, a conclusion that these steps are well-understood, routine, conventional activity is supported under Berkheimer. The claim is ineligible (Step 2B: Inventive Concept?: No).
Dependent claims 2-4, 6-9, 11-12, 14, and 16-22 do not include any other additional elements that are sufficient to amount to significantly more than the judicial exception. Therefore, claims 1-4, 6-12, and 14-22 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because claims 15-20 are directed to a computer readable medium, which can encompass non-statutory transitory forms of signal transmission. See In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007). The Examiner suggests amending the claims to specify that the computer readable medium is a non-transitory computer readable medium.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 9-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shinya (US 20190026923 A1) in view of Nambi (NAMBI, A. et al., "FarSight: A Smartphone-based Vehicle Ranging System", Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 2, Issue 4, Article No. 181, 27 December 2018, pages 1-22, DOI: 10.1145/328705 [1]), further in view of Miyamaki et al. (US 20200211158 A1), and further in view of Huang et al. (US 20190057260 A1).
With respect to claim 1, Shinya discloses a method implemented by a processor disposed in a vehicle, the method comprising: receiving images from one or more sensors installed in the vehicle (see at least [0031-0036]); and identifying a first bounding box in a first image and a second bounding box in a second image (see at least [00545], [0140], [Figs. 6-7]), wherein the first bounding box and the second bounding box are associated with metadata providing information corresponding to a same object (see at least [0003] and [0144-0145]).
However, Shinya does not specifically disclose applying, on the images, a detection algorithm stored in the processor to perform the identifying step.
Nambi teaches applying, on the images, a detection algorithm stored in the processor to perform the identifying step (see at least [pages 8 and 10], “Each such bounding box is passed on as the input to the tracking algorithm…”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya, with a reasonable expectation of success, to incorporate the teachings of Nambi of applying, on the images, a detection algorithm stored in the processor to perform the identifying step. This would be done to improve estimating the separation distance between vehicles with higher accuracy based on detections and thus increase the safety of vehicle driving (see Nambi page 2).
Moreover, Shinya as modified by Nambi does not specifically teach generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image.
Miyamaki teaches generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image (see at least [0209] and [0283]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya as modified by Nambi, with a reasonable expectation of success, to incorporate the teachings of Miyamaki of generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image. This would be done to allow easy performance of stitching or mapping of images (see Miyamaki para 0007).
Shinya as modified by Nambi and Miyamaki does not specifically teach wherein the same object comprises a vehicle, the output comprises a vehicle type classification that is a majority vote based on a detected vehicle type classification in the first image and the second image.
Huang teaches wherein the same object comprises a vehicle (see at least [0030], [0039-0040], [0052], and [claim 1]), the output comprises a vehicle type classification that is a majority vote based on a detected vehicle type classification in the first image and the second image (see at least [0030], [0039-0040], [0052], and [claim 1]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya as modified by Nambi and Miyamaki, with a reasonable expectation of success, to incorporate the teachings of Huang wherein the same object comprises a vehicle and the output comprises a vehicle type classification that is a majority vote based on a detected vehicle type classification in the first image and the second image. This would be done to improve detection of vehicles and increase the accuracy of classifying a detected vehicle, which helps improve monitoring of traffic flow and other functions.
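For purposes of illustration only, the majority-vote classification recited in this limitation can be sketched as follows; the function name, labels, and example values below are hypothetical and are not drawn from the claims or the cited references.

```python
from collections import Counter

def majority_vote(classifications):
    """Return the vehicle-type label that appears most often
    among the per-image detection results."""
    # Counter.most_common(1) yields [(label, count)] for the top label
    return Counter(classifications).most_common(1)[0][0]

# Hypothetical per-image vehicle-type classifications
print(majority_vote(["truck", "truck", "sedan"]))  # -> truck
```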
With respect to claim 2, Shinya discloses fusing the first image and the second image based on the metadata (see at least [0003] and [0144-0145]).
With respect to claim 3, Shinya discloses before the applying of the detection algorithm: selecting and cropping one or more regions of interest in at least one of the images (see at least [0050-0057]).
With respect to claim 4, Shinya discloses wherein the first image and the second image are from a same sensor; or the first image and the second image are from different sensors facing a direction within boundaries of a same direction (see at least [0178]).
With respect to claim 6, Shinya discloses wherein the same object is a vehicle (see at least [0003] and [0144-0145]), and wherein the metadata of a bounding box comprises at least one of a vehicle feature vector, a taillight signal detection result or a vehicle segmentation mask corresponding to the vehicle detected in the first bounding box or the second bounding box (see at least [0003] and [0144-0145]).
With respect to claim 9, Shinya does not specifically disclose wherein the applying of the detection algorithm provides detection outputs having a unified focal plane.
Nambi teaches wherein the applying the detection algorithm provides detection outputs having a unified focal plane (see at least [pages 8 and 10]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya, with a reasonable expectation of success, to incorporate the teachings of Nambi wherein the applying of the detection algorithm provides detection outputs having a unified focal plane. This would be done to improve estimating the separation distance between vehicles with higher accuracy based on detections and thus increase the safety of vehicle driving (see Nambi page 2).
With respect to claims 10 and 12 they are apparatus claims that recite substantially the same limitations as the respective method claims 1 and 4. As such, claims 10 and 12 are rejected for substantially the same reasons given for the respective method claims 1 and 4 and are incorporated herein.
With respect to claim 11, Shinya discloses wherein the metadata includes at least one of 2D or 3D detection results, a vehicle-type classification, a vehicle identification, a taillight signal detection results, or a vehicle segmentation mask (see at least [0003] and [0144-0145]).
With respect to claim 14, Shinya discloses wherein the same object further comprises at least one of a pedestrian or a structure adjacent to roadways (see at least [0003] and [0144-0145]).
With respect to claims 15 and 20, they are computer-readable storage medium claims that recite substantially the same limitations as the respective method claims 1, 4, and 5. As such, claims 15 and 20 are rejected for substantially the same reasons given for the respective method claims 1, 4, and 5 and are incorporated herein.
With respect to claim 16, Shinya discloses wherein the metadata of the first bounding box comprises at least one of an object feature vector, a taillight signal detection result or an object segmentation mask corresponding to the same object detected in the first bounding box (see at least [0003] and [0144-0145]).
With respect to claim 17, Shinya discloses wherein the metadata of the first bounding box in the first image further comprises at least one of a camera pose, a focal length, a shutter speed or a field-of-view associated with a sensor that has generated the first image (see at least [0003] and [0144-0145]).
With respect to claim 18, Shinya discloses wherein the object feature vector comprises a color of an object or a make of the object (see at least [0003] and [0144-0145]).
With respect to claim 19, Shinya discloses wherein the object segmentation mask comprises one or more contours of an object (see at least [0003] and [0144-0145]).
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Shinya (US 20190026923 A1) in view of Nambi (NAMBI, A. et al., "FarSight: A Smartphone-based Vehicle Ranging System", Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 2, Issue 4, Article No. 181, 27 December 2018, pages 1-22, DOI: 10.1145/328705 [1]), further in view of Miyamaki et al. (US 20200211158 A1), further in view of Huang et al. (US 20190057260 A1), and further in view of Lee (US 20190354786 A1).
With respect to claim 7, Shinya as modified by Nambi, Miyamaki, and Huang does not specifically teach wherein the first image comprises a left taillight and a right taillight of the same vehicle, wherein the second image comprises exactly one taillight of the same vehicle, and wherein the output comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight.
Lee teaches wherein the first image comprises a left taillight and a right taillight of the same vehicle (see at least [0011] and [0023]), wherein the second image comprises exactly one taillight of the same vehicle (see at least [0011] and [0023]), and wherein the output comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight (see at least [0011] and [0023]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya as modified by Nambi, Miyamaki, and Huang, with a reasonable expectation of success, to incorporate the teachings of Lee wherein the first image comprises a left taillight and a right taillight of the same vehicle, wherein the second image comprises exactly one taillight of the same vehicle, and wherein the output comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight. This would be done to streamline decision making, allowing for safer driving.
With respect to claim 8, Shinya as modified by Nambi, Miyamaki, and Huang does not specifically teach wherein the first image comprises a first vehicle segmentation mask corresponding to the same vehicle, wherein the second image comprises a second vehicle segmentation mask corresponding to the same vehicle, and wherein the output comprises a vehicle segmentation mask based on a convex combination of the first vehicle segmentation mask and the second vehicle segmentation mask.
Lee teaches wherein the first image comprises a first vehicle segmentation mask corresponding to the same vehicle (see at least [0011] and [0023]), wherein the second image comprises a second vehicle segmentation mask corresponding to the same vehicle (see at least [0011] and [0023]), and wherein the output comprises a vehicle segmentation mask based on a convex combination of the first vehicle segmentation mask and the second vehicle segmentation mask (see at least [0011] and [0023]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shinya as modified by Nambi, Miyamaki, and Huang, with a reasonable expectation of success, to incorporate the teachings of Lee wherein the first image comprises a first vehicle segmentation mask corresponding to the same vehicle, wherein the second image comprises a second vehicle segmentation mask corresponding to the same vehicle, and wherein the output comprises a vehicle segmentation mask based on a convex combination of the first vehicle segmentation mask and the second vehicle segmentation mask. This would be done to streamline decision making, allowing for safer driving.
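For purposes of illustration only, a convex combination of two segmentation masks (a weighted blend with non-negative weights summing to one) can be sketched as follows; the function name, weight, and mask values are hypothetical and are not drawn from the claims or the cited references.

```python
def blend_masks(mask_a, mask_b, alpha=0.5):
    """Convex combination of two same-size soft masks:
    alpha * mask_a + (1 - alpha) * mask_b, with 0 <= alpha <= 1."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1] for a convex combination")
    return [[alpha * a + (1.0 - alpha) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

# Hypothetical 2x2 soft segmentation masks from two images
m1 = [[1.0, 0.0], [0.5, 0.5]]
m2 = [[0.0, 1.0], [0.5, 0.5]]
print(blend_masks(m1, m2))  # -> [[0.5, 0.5], [0.5, 0.5]]
```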
Conclusion
Applicant’s amendment necessitated the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDALLA A KHALED whose telephone number is (571) 272-9174. The examiner can normally be reached Monday-Thursday, 8:00 AM-5:00 PM, and every other Friday, 8:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached on (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDALLA A KHALED/Examiner, Art Unit 3667