DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-11 are pending; of those, claims 1-5 are examined and claims 6-11 are withdrawn in view of the election by original presentation below. This action is in response to the claims filed 11/12/25.
Foreign Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
Applicant’s arguments regarding the 35 U.S.C. § 101 rejections, see Applicant’s Remarks filed on 11/12/25, have been fully considered but are not found persuasive.
Applicant’s remarks, pgs. 9-11, assert that converting images into future images and displaying the images constitutes sufficient additional elements directed to a particular improvement in display stability, with specific reference to Subject Matter Eligibility Examples 42 and 39 regarding converting information formats and transforming digital images. However, the claims contain no recitation at that level of specificity regarding how the images are converted into future images or what is transformed. The claims simply take an image, predict the future positioning of the vehicle utilizing the image, and convert it into a future image. For example, the vehicle may be stationary, capturing an image of the storefront wall in front of it, which is not predicted to change; displaying the same image as the prediction requires no changes and is capable of being performed within the human mind. While this example of the claims being performed in the human mind is not limiting in scope, it shows that the claims are still interpreted as an abstract idea.
As stated in the previous rejection and reiterated below, including explicit physical control implementation or user inputs may help overcome this rejection.
Therefore, the rejections are maintained.
Applicant’s arguments regarding the 35 U.S.C. § 102 and 35 U.S.C. § 103 rejections, see Applicant’s Remarks filed on 11/12/25, are persuasive in view of the amendments filed 11/12/25.
However, upon further consideration, new grounds of rejection are made in view of further citations to the art of record below.
Election/Restrictions
Newly submitted claims 6-11 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons:
Invention (II), which defines the reference time as a time at which the images are captured, as in claim 6, and Invention (III), which defines the reference time as a counter that is pre-synchronized with the mobile body and increments over time, as in claim 7, are directed to related processes. The related inventions are distinct if: (1) the inventions as claimed are either not capable of use together or can have a materially different design, mode of operation, function, or effect; (2) the inventions do not overlap in scope, i.e., are mutually exclusive; and (3) the inventions as claimed are not obvious variants. See MPEP § 806.05(j). In the instant case, the inventions as claimed (1) cannot be used together, as each different reference time would alter the function of the other invention, (2) are mutually exclusive based on the different definitions of the reference times, and (3) are not obvious variants, as a reference time can be any related time associated with a process. Furthermore, the inventions as claimed do not encompass overlapping subject matter and there is nothing of record to show them to be obvious variants.
Invention (IV), which generates prediction information utilizing position and orientation information of the camera at a point in the future as seen in claim 1, and Invention (V), which generates prediction information utilizing an operation state of a driving device of the mobile body, including explicit control positions of the vehicle, as seen in claims 8-11, are related as combination and subcombination. Inventions in this relationship are distinct if it can be shown that (1) the combination as claimed does not require the particulars of the subcombination as claimed for patentability, and (2) the subcombination has utility by itself or in other combinations (MPEP § 806.05(c)). In the instant case, the combination as claimed does not require the particulars of the subcombination as claimed because the operation state of the vehicle, including specific control input positions such as “an amount of depression of an accelerator pedal, a steering angle of a steering wheel, an amount of depression of a brake pedal, and a position of a shift lever,” is not required in the combination in order to generate prediction information. The subcombination has separate utility; for example, it includes significantly more processing of control input positioning in predicting the future movements of the vehicle as a dynamic system, whereas the combination focuses solely on environmental cues relative to current positioning information for generating prediction information. The subcombination, Invention (V), requires significantly more complex information collection and processing relative to the combination, Invention (IV).
The examiner has required restriction between combination and subcombination inventions. Where applicant elects a subcombination, and claims thereto are subsequently found allowable, any claim(s) depending from or otherwise requiring all the limitations of the allowable subcombination will be examined for patentability in accordance with 37 CFR 1.104. See MPEP § 821.04(a). Applicant is advised that if any claim presented in a divisional application is anticipated by, or includes all the limitations of, a claim that is allowable in the present application, such claim may be subject to provisional statutory and/or nonstatutory double patenting rejections over the claims of the instant application.
Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 6-11 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03.
To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claims discuss a device that falls under a machine in Step 1.
In Step 2A, Prong One, the device falls under an abstract idea as a mental process with nothing more than a generic computer.
Simply collecting information, analyzing it, and displaying certain results can be performed within the human mind even with the use of a generic computer, and the claims do not recite any additional elements that integrate the judicial exception into a practical application in Step 2A, Prong Two. See MPEP § 2106.04(a)(2).
In Step 2B, the claim does not recite additional claim elements that amount to significantly more than the judicial exception. Receiving video information is insignificant pre-solution activity, generating/converting a plurality of future frames is a mental process, and displaying the results is insignificant post-solution activity.
MPEP 2106.04(a)(2)(III)(A) discloses the following:
In contrast, claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include:
• a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016);
Therefore, the claim is not eligible subject matter.
Including claim amendments reciting some form of physical control implementation, such as physically controlling the vehicle to move toward the predicted future positioning or receiving input from the operator, may overcome the rejection.
Dependent claims do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims are not patent eligible under the same rationale as provided for in the rejection of Claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kusano et al. (US 2018/0259349) in view of Kwan et al. (US 2020/0064862).
Regarding claims 1 and 5, Kusano discloses an autonomous vehicle virtual reality navigation system including a device/method for assisting remote driving of a mobile body by a remote operator, the method comprising: communicating with the mobile body via a network (Abstract, ¶27, and ¶34-37 – control system/virtual reality display can communicate wirelessly with the vehicle corresponding to the recited remote driving of a mobile vehicle by a remote operator which communicates with the mobile body via a network);
receiving, from the mobile body, video information including a plurality of images captured by a camera of the mobile body and each reference time for each image of the plurality of the images, the plurality of images being images of surroundings of the mobile body (¶24, ¶37-41, and Fig. 6 – element S605 and a plurality of sensors including cameras continuously record sensor data, corresponding to the recited video information including a plurality of images captured by a camera of the mobile body; the sensor data, corresponding to the recited images of surroundings of the mobile body, are utilized to determine the current position of the vehicle; and the generation of the vehicle display consistently at the predetermined amount of time in the future implicitly requires the sensor data to have reference times for each of the plurality of images in order to continuously generate accurate predicted video of the vehicle driving at the predetermined amount of time in the future);
generating prediction information on a position and orientation of the camera at a point in a future by a set amount of time from each reference time of the images, respectively, the set amount being set based on a history of a communication delay time; converting, respectively, the images into a plurality of future images based on the prediction information, the plurality of future images being images of surroundings of the mobile body at the point in the future; and displaying the future images on a display of a terminal operated by the remote operator at a time when the set amount of time passes from each reference time of the images, respectively (¶23-24, ¶37-41, Figs. 4A-4B, and Fig. 6 – elements S610-S615 disclose the position of the virtual reality autonomous vehicle, corresponding to the recited position and orientation, as shown in Figs. 4A-4B, which is updated continuously as if a video of the vehicle driving at the predetermined amount of time in the future is being displayed, corresponding to the recited point in a future by a set amount of time from each reference time of the images; the predetermined amount of time is set based on the output from the sensors such that the operator can view and respond to the upcoming road segments before reaching them, corresponding to the recited communication delay time; the virtual reality video at the predetermined amount of time in the future corresponds to the recited images converted into future images based on the prediction information; and displaying the future surroundings of the vehicle on a display terminal operated by the remote operator at the predetermined amount of time in the future corresponds to the recited set amount of time passing from each reference time of the images).
While Kusano implicitly discloses reference times associated with the images in order to store and utilize the sensor data, it does not explicitly disclose such reference times.
However, Kwan discloses an autonomous vehicle information processing system including a time synchronization hub device that includes one or more transmit (TX) timestamp generators coupled to a time source, where the TX timestamp generators generate TX timestamps based on a time obtained from the time source and provide the TX timestamps to one or more of the sensors, indicating a time the sensors transmit sensor data to the host system via the host interface (¶31-32).
The combination of the projected vehicle movement display of Kusano with the specific sensor data transmission timestamp of Kwan fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the projected vehicle movement display of Kusano with the specific sensor data transmission timestamps of Kwan in order to improve the accuracy and efficiency of the motion planning and control (Kwan - ¶3).
Regarding claim 2, Kusano further discloses a predetermined amount of time in the future from the current time, as well as an adjustable predetermined amount of time (¶32 and Figs. 4A-B) and a communication network (¶37), but does not explicitly disclose that the reference time includes a time at which each of the frames is externally transmitted from a terminal of the mobile body.
However, Kwan further discloses wherein each reference time of the images includes a time at which a corresponding one of the images is externally transmitted from a terminal of the mobile body via the network (¶31-32 – the time synchronization hub device includes one or more transmit (TX) timestamp generators coupled to a time source, where the TX timestamp generators generate TX timestamps based on a time obtained from the time source and provide the TX timestamps to one or more of the sensors, indicating a time the sensors transmit sensor data to the host system via the host interface).
The combination of the projected vehicle movement display of Kusano with the specific sensor data transmission timestamp of Kwan fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the projected vehicle movement display of Kusano with the specific sensor data transmission timestamps of Kwan in order to improve the accuracy and efficiency of the motion planning and control (Kwan - ¶3).
Regarding claim 3, Kusano further discloses performing projection transformation on each of the images based on the prediction information (¶27-34 and Figs. 4A-B – the autonomous vehicle is displayed at the predetermined amount of time in the future in virtual reality, where the displayed future position of the vehicle includes position and orientation and is determined utilizing that information, corresponding to the recited processing of performing projection transformation on each of the frames based on the information regarding the position and orientation of the camera).
Regarding claim 4, Kusano further discloses superimposing an auxiliary image based on the prediction information on each of the images captured by the camera (¶27-34 and Figs. 4A-B – the autonomous vehicle is displayed at the predetermined amount of time in the future in virtual reality, where the displayed future position of the vehicle includes position and orientation and is determined utilizing that information, and where the displayed vehicle position is a superimposed image based on that information, corresponding to the recited auxiliary image superimposed based on the prediction information).
Additional References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hong et al. (US 2022/0092983) discloses a trajectory prediction system for autonomous vehicles including utilizing time stamps on reference frames (¶152).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew J Reda whose telephone number is (408)918-7573. The examiner can normally be reached Monday - Friday 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW J. REDA/ Primary Examiner, Art Unit 3665