DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed on 21 October 2025 in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 September 2025 has been entered.
Response to Arguments
Applicant's arguments filed 22 September 2025 have been fully considered but they are persuasive only in part.
First, the duplicate claim warning is obviated by applicant’s amendments.
Second, the claim objection is overcome by applicant’s claim amendments.
Third, the rejection under 35 U.S.C. 112(a), written description requirement, is rendered moot by applicant's claim amendments, and a new rejection in this respect is instituted below.
In this respect, applicant preemptively argues:
“By specifying that the controller is conducting pattern recognition of the data from the at least one sensor, the claim recites a specific type of algorithm/procedure. Additionally, specifying that the generating of the output by utilizing the trained computer model is based on the at least one feature further specificity is provided regarding generation of the output. Support for the foregoing can be found in at least paragraphs [0036] and [0049] of the Pending Application. With regard to controlling the presentation of the at least one first image on the Pending Application.”
Paragraphs [0036] and [0049] of the pending application indicate this:
[0036] In one embodiment, the controller 107 performs data intensive, in-memory processing using data and/or instructions organized in memory 109 or otherwise organized in the autonomous vehicle 103. For example, the controller 107 can perform a real-time analysis of a set of data collected and/or stored in the autonomous vehicle 103. For example, in some applications, the autonomous vehicle 103 is connected to real-time sensors 106 to store sensor inputs; and the processors of the controller 107 are configured to perform machine learning and/or pattern recognition based on the sensor inputs to support an artificial intelligence (AI) system that is implemented at least in part via the autonomous vehicle 103 and/or the server 101.
[0049] In one embodiment, the collected data comprises image data, and analyzing the collected data comprises performing facial recognition on the image data to identify facial features of at least one passenger of the autonomous vehicle. In one embodiment, the performing the facial recognition comprises extracting features from an image of a face of a first passenger to determine an emotional state of the first passenger.
However, while these passages mention pattern/facial recognition (and are the only apparent support for pattern or facial recognition in the specification), they do not support the claim amendments, e.g., that the “pattern recognition of the data [is] based on inputting the data into a trained computer model configured to identify at least one feature associated with the autonomous vehicle”: no features “associated with” the vehicle are described or clarified in the specification, and only facial features are identified. Accordingly, applicant's arguments are not convincing.
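For illustration only, a minimal sketch (in Python, with all names and the classifier form hypothetical) of the facial-recognition flow that paragraphs [0036] and [0049] do appear to support, i.e., extracting features from a face image and classifying an emotional state with a trained model:

    import numpy as np

    def extract_facial_features(face_image: np.ndarray) -> np.ndarray:
        # Hypothetical feature extraction: normalized pixel intensities
        # stand in for landmark-based facial features.
        return face_image.astype(np.float32).ravel() / 255.0

    def classify_emotional_state(features: np.ndarray, trained_weights: np.ndarray) -> int:
        # Hypothetical trained linear classifier over emotional-state labels;
        # returns the index of the highest-scoring state.
        scores = trained_weights @ features
        return int(np.argmax(scores))

Nothing of even this level of detail is disclosed for identifying features “associated with” the vehicle, as discussed further below.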
Fourth, in view of the claim amendments, new rejections under 35 U.S.C. 112(b) are instituted by the examiner.
Fifth, while the claims are largely indefinite and unsupported by the original specification, applicant's arguments that the claims distinguish over the applied prior art under 35 U.S.C. 103 are not convincing.
In this respect, applicant argues:
Notably, however, Szczerba’s identification of patterns in the vehicle’s operation via monitoring is not the same as conducting pattern recognition on data specifically based on inputting the data into a trained computer model to identify at least one feature associated with an autonomous vehicle.
The examiner below shows how the extraction and classification of “features or patterns in the data indicative of an object in the patch” (e.g., paragraph [0123]), etc. in Szczerba et al. (‘595), together with the “pattern recognition” and “image recognition” in Szczerba et al. (‘595) and the object recognition in Kim et al. (’492) at paragraphs [0030] and [0041] satisfy this limitation, as best understood by the examiner. Accordingly, applicant’s arguments are not persuasive in this respect.
Next, applicant argues:
Szczerba fails to disclose specifically using a trained computer model to generate an output based on at least one feature that is identified through pattern recognition on sensor data. Additionally, Szczerba fails to disclose the foregoing in the context of controlling presentation on a windshield of an autonomous vehicle.
The examiner below shows how the converted (visibility reference value satisfying) image in Kim et al. (‘492), as at paragraphs [0166], [0167], FIGS. 1, 3, 4, 6, 7, etc., and the object classification, track, update, etc. in Szczerba et al. (‘595) at e.g., FIGS. 17, 18, 38, etc. satisfy this limitation. Accordingly, applicant’s arguments are not persuasive in this respect.
Lastly, applicant argues:
“Szczerba fails to disclose controlling presentation specifically based on both the output of a computer model generated based on at least one feature identified by the computer model based on pattern recognition, and also the mode determined for the autonomous vehicle.”
However, the examiner below shows how the applied references meet this indefinite and unsupported limitation, as best understood. Accordingly, applicant’s arguments are not persuasive in this respect.
Sixth, in view of the claim amendments, the examiner chooses to withdraw the nonstatutory double patenting rejections.
Accordingly, applicant’s arguments are only persuasive in part.
Claim Objections
Claim 30 is objected to because of the following informalities: in claim 30, line 2, it appears “the vehicle” should read “the autonomous vehicle,” e.g., for consistency within the claim set (cf. claims 21 and 24). Appropriate correction or reasoned traversal is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the “controller” in claims 21, 34, and 45.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21 to 47 and 49 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding independent claims 21, 34, and 45, applicant has not previously described, in sufficient detail, by what algorithm(s), or by what steps or procedure, he “conduct[ed] pattern recognition of the data based on inputting the data into a [trained] computer model configured to identify at least one feature associated with the [] vehicle” and generated an output utilizing the model based on the vehicle feature. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has, if anything, only described a desired result.
In particular, the computer model and its output are described in this way in applicant's published specification (see also FIG. 1):
[0015] In various embodiments, the systems and methods for controlling a display device of an autonomous vehicle described below are used to determine a manner of control for a display device (e.g., determine an appropriate or desirable manner and/or time in which to present images). Some embodiments describe how to configure and/or control various aspects of a display device when presenting images to passengers. Other embodiments describe the use of sensors and other inputs provided to machine learning or other computer models that provide one or more outputs used to control various aspects of the operation of a display device in an autonomous vehicle.
[0024] In one embodiment, the controller 107 analyzes the collected data from the sensors 106. The analysis of the collected data includes providing some or all of the collected data as one or more inputs to a computer model 112. The computer model 112 can be, for example, an artificial neural network trained by deep learning. In another example, the computer model is a machine learning model that is trained using training data 114. The computer model 112 and/or the training data 114 can be stored, for example, in memory 109.
[0026] For example, the received data may include data collected from sensors of autonomous vehicles other than autonomous vehicle 103. This data may be included, for example, in training data 114 for training of the computer model 112. The received data may also be used to update a configuration of a machine learning model stored in memory 109 as computer model 112.
[0042] At block 203, the data collected from the one or more sensors is analyzed. This analysis may include providing some or all of the collected data as an input to a computer model. For example, the computer model may be stored in memory 109 and implemented by the controller 107 of FIG. 1, as was discussed above.
[0044] The control of the display devices may include, for example, performing one or more actions by the controller 107 based on one or more outputs from the computer model 112. These actions may include, for example, control of the configuration of the display device 108. This control may include, for example, changing a state of the display device 108 from a transparent state to an opaque state. The opaque state is, for example, a state in which the display device, or a surface thereof, is suitable for the presentation of images to the driver and/or passengers.
[0045] In one embodiment, the method includes collecting, by at least one processor, data from at least one sensor in an autonomous vehicle; analyzing, by the at least one processor, the collected data from the at least one sensor, the analyzing comprising providing the collected data as an input to a computer model; and controlling, based on the analyzing the collected data, a display device of the autonomous vehicle, wherein the controlling comprises performing an action based on an output from the computer model.
[0047] In one embodiment, the method further comprises training the computer model using at least one of supervised or unsupervised learning, wherein the training is done using training data including at least a portion of the collected data.
[0055] In one embodiment, a system for an autonomous vehicle used with the above methods includes: one or more sensors; a display device(s); at least one processor; and memory storing instructions configured to instruct the at least one processor to: collect data from the at least one sensor; analyze the collected data, wherein the analyzing comprises providing the data as an input to a machine learning model; and control, based on the analyzing the collected data, the display device, wherein the controlling comprises performing an action based on at least one output from the machine learning model.
[0058] In one embodiment, the display device comprises at least one window of the autonomous vehicle, and controlling the display device comprises changing a state of the at least one window to a transparent state that permits passenger viewing outside of the autonomous vehicle, wherein the instructions are further configured to instruct the at least one processor to: based on the at least one output from the machine learning model, select a route for controlling navigation of the autonomous vehicle.
[0059] In one embodiment, the system further comprises a communication interface configured to: wirelessly transmit the collected data to a computing device; and receive training data from the computing device; wherein a configuration of the machine learning model is updated using the training data.
Pattern or facial recognition, and the identification of “facial features” from an image of a face of a passenger, as the only “features” specifically identified in the application (see also original claim 13) and not commensurate with the full scope of the claimed invention, is described this way at published paragraphs [0036] and [0049]:
[0036] In one embodiment, the controller 107 performs data intensive, in-memory processing using data and/or instructions organized in memory 109 or otherwise organized in the autonomous vehicle 103. For example, the controller 107 can perform a real-time analysis of a set of data collected and/or stored in the autonomous vehicle 103. For example, in some applications, the autonomous vehicle 103 is connected to real-time sensors 106 to store sensor inputs; and the processors of the controller 107 are configured to perform machine learning and/or pattern recognition based on the sensor inputs to support an artificial intelligence (AI) system that is implemented at least in part via the autonomous vehicle 103 and/or the server 101.
[0049] In one embodiment, the collected data comprises image data, and analyzing the collected data comprises performing facial recognition on the image data to identify facial features of at least one passenger of the autonomous vehicle. In one embodiment, the performing the facial recognition comprises extracting features from an image of a face of a first passenger to determine an emotional state of the first passenger.
However, no algorithm(s) or steps/procedure for the controller conducting “pattern recognition” based on sensor data inputted into a computer model that is configured to identify any or all features as may be “associated with the vehicle” as encompassed and covered by the claim (as opposed to mere facial features of a passenger) is/are apparently described, in sufficient detail, in the specification. Nor is/are any algorithm(s) apparently described, in sufficient detail, by which any or all “computer model[s]” as are covered by the claim (e.g., for example only, object recognition models, weather prediction models, pandemic models, asteroid strike models, predictive path models, etc.) would have been used to identify any or all non-descript “feature[s] associated with [a] vehicle” (e.g., perhaps weight, fuel efficiency, range, tinted glass, bucket seats with lumbar supports, cup holders, a 10 year/100,000 mile warranty, vehicle stability control (VSC), a lane departure warning system, free tire rotations, etc.), with only facial features being indicated as identified in the specification. For example, particularly how (e.g., by what algorithm(s) or steps/procedure) was an object detection model or a weather prediction model, or even a “machine learning model” as indicated at published paragraph [0055] of the specification, utilized by applicant to identify any or all features associated with the (autonomous) vehicle, such as a heated steering wheel or a locking gas tank cap or a warranty, etc., as are/may be covered by the claim? Also, no algorithm(s) or steps/procedure for identifying any or all “feature[s] associated with the [] vehicle”, or for generating any or all output(s) of a computer model based on the at least one feature, is/are apparently described, in sufficient detail, in the specification. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has only, if anything, described a desired result.
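For illustration only, a minimal sketch (all names hypothetical, and not drawn from the specification) of the kind of algorithm-level description that is absent, i.e., pattern recognition by inputting sensor data into a trained model that identifies a vehicle-associated feature and generates an output based on it:

    import numpy as np

    def recognize_vehicle_feature(sensor_data: np.ndarray,
                                  trained_weights: np.ndarray,
                                  feature_labels: list) -> tuple:
        # Hypothetical pattern recognition: score each candidate feature
        # "associated with the vehicle" from the raw sensor data.
        scores = trained_weights @ sensor_data.ravel()
        feature_index = int(np.argmax(scores))
        # Hypothetical model output generated "based on" the identified feature.
        output = {"feature": feature_labels[feature_index],
                  "confidence": float(scores[feature_index])}
        return feature_labels[feature_index], output

The specification, by contrast, discloses no comparable steps for identifying any feature other than facial features of a passenger.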
Regarding independent claims 21, 34, and 45, applicant has not previously described, in sufficient detail, by what algorithm(s), or by what steps or procedure, he controlled presentation/display of a transformed first image “based on the [computer model] output and based on a mode determined for the [autonomous] vehicle”. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has, if anything, only described a desired result.
In this respect, paragraphs [0043] to [0045], [0055], [0056], and [0058] of the published specification indicate:
[0043] At block 205, one or more display devices are controlled based on the analysis of the collected data. For example, images can be generated and presented on one or more display devices 108 of FIG. 1.
[0044] The control of the display devices may include, for example, performing one or more actions by the controller 107 based on one or more outputs from the computer model 112. These actions may include, for example, control of the configuration of the display device 108. This control may include, for example, changing a state of the display device 108 from a transparent state to an opaque state. The opaque state is, for example, a state in which the display device, or a surface thereof, is suitable for the presentation of images to the driver and/or passengers.
[0045] In one embodiment, the method includes collecting, by at least one processor, data from at least one sensor in an autonomous vehicle; analyzing, by the at least one processor, the collected data from the at least one sensor, the analyzing comprising providing the collected data as an input to a computer model; and controlling, based on the analyzing the collected data, a display device of the autonomous vehicle, wherein the controlling comprises performing an action based on an output from the computer model.
[0055] In one embodiment, a system for an autonomous vehicle used with the above methods includes: one or more sensors; a display device(s); at least one processor; and memory storing instructions configured to instruct the at least one processor to: collect data from the at least one sensor; analyze the collected data, wherein the analyzing comprises providing the data as an input to a machine learning model; and control, based on the analyzing the collected data, the display device, wherein the controlling comprises performing an action based on at least one output from the machine learning model.
[0056] In one embodiment, the display device comprises a liquid crystal display, and performing the action comprises generating at least one image for display by the liquid crystal display for viewing by a passenger of the autonomous vehicle.
[0058] In one embodiment, the display device comprises at least one window of the autonomous vehicle, and controlling the display device comprises changing a state of the at least one window to a transparent state that permits passenger viewing outside of the autonomous vehicle, wherein the instructions are further configured to instruct the at least one processor to: based on the at least one output from the machine learning model, select a route for controlling navigation of the autonomous vehicle.
[0063] At block 301, a determination is made whether an autonomous vehicle is in an automatic navigation mode. For example, it is desired that images not be displayed when the driver is manually navigating or otherwise controlling movement of the vehicle.
[0064] At block 303, in response to determining that the autonomous vehicle is in an automatic navigation mode, a windshield and/or other display device of the autonomous vehicle is controlled to provide a display for viewing by the driver and/or a passenger. For example, a state of the windshield may change from a transparent state to an opaque state. Also, a projector mounted in the autonomous vehicle may be activated to project images onto the windshield.
[0069] In one embodiment, the windshield turns into a “movie theater-like” screen or display. When the vehicle is in an auto-pilot mode, the controller 107 transforms the windshield into a display screen with image transformation that corrects the distortion in the shape of the windshield, such that the image appears to be presented on a flat surface according to the view point of each of one or more passengers in order to provide an improved viewing experience.
However, while the specification indicates (e.g., without sufficient detail) that an action possibly related to presentation/display, or a state of the windshield, may be controlled based on a computer model or a vehicle mode, no algorithm(s) or steps/procedure are apparently described, in sufficient detail, for controlling presentation/display of the (transformed) image based on both the computer model output that is based on any or all identified “at least one feature associated with the vehicle” and any or all vehicle modes. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has only, if anything, described a desired result.
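For illustration only, a minimal sketch (all names and the display API hypothetical) of presentation control jointly conditioned on a model output and a determined vehicle mode, i.e., the kind of combined control logic not found described in detail:

    def control_presentation(model_output: dict, vehicle_mode: str, windshield) -> None:
        # Hypothetical gating: present the transformed image only when the
        # vehicle mode is automatic navigation (cf. published paragraphs
        # [0063]-[0064]) AND the model output calls for display.
        if vehicle_mode == "automatic_navigation" and model_output.get("display"):
            windshield.set_state("opaque")          # hypothetical display API
            windshield.show(model_output["image"])  # hypothetical display API
        else:
            windshield.set_state("transparent")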
Regarding claim 39, applicant has not previously described, in sufficient detail, by what algorithm(s), or by what steps or procedure, he configured the controller to train the computer model using supervised or unsupervised learning. No configuration of the controller, or algorithm(s) or steps/procedure for training the computer model using supervised or unsupervised learning, is/are apparently described in the specification, in sufficient detail. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has, if anything, only described a desired result.
Regarding claim 40, applicant has not previously described, in sufficient detail, by what algorithm(s), or by what steps or procedure, the memory device “analyze[d] the sensor data”. No analyzing of the sensor data by the memory device is apparently described in the specification, in sufficient detail. Accordingly, the examiner believes applicant has not evidenced, to those skilled in the art, possession of the full scope of the claimed invention, but has, if anything, only described a desired result.
Claims 21 to 47 and 49 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In claim 21, lines 5ff, in claim 34, lines 10ff, and in claim 45, lines 7ff, “conduct[ing], by the controller, pattern recognition of the data based on inputting the data into a [trained] computer model configured to identify at least one feature associated with the [autonomous] vehicle” is indefinite in its entirety from the teachings of the specification, which neither describes nor clarifies what conducting “pattern recognition” of the sensor data “based on inputting the data into a [] computer model” might possibly mean, what the computer model being configured to “identify” a feature “associated with” the vehicle might possibly mean, or how the metes and bounds of any or all “feature[s]” that are somehow “associated with the [] vehicle” (e.g., perhaps in the same county as the vehicle, or owned by the same person as the vehicle?) could be determined with reasonable certainty from the teachings of the specification.
In claim 21, lines 8ff, in claim 34, lines 13ff, and in claim 45, lines 10ff, “generat[ing], by utilizing the trained computer model, an output based on the at least one feature” is indefinite in the claim context and from the teachings of the specification, which does not teach that the model “generat[es]” any “output” based on any “feature associated with the [] vehicle”, and does not clarify, with reasonable certainty, what generating the output would mean, what the metes and bounds of the generated “output” might possibly be, or how the generating would be based on the (unclear) feature.
In claim 21, lines 13ff, in claim 34, line 16, and in claim 45, lines 14ff, “control[], based on the output and based on a mode determined for the [] vehicle, [presentation or display] of the [at least one first or generated] image” is indefinite in the claim context and from the teachings of the specification, which apparently describes no presentation or display control that is based on both the claimed “output” and the claimed “mode”.
In claim 21, lines 13ff, in claim 34, line 16, and in claim 45, lines 14ff, “based on a mode determined for the [autonomous] vehicle” is indefinite from the teachings of the specification which does not clarify how any or all “mode[s]” “for the [] vehicle” would be defined with reasonable certainty (e.g., a mode for doing what particularly in the vehicle, a mode determined by whom or what particularly, etc.?), leaving the scope of the claim facially subjective (e.g., anything trivial that always exists might be called a “mode” for the vehicle), such that the phrase “based on . . . a mode” effectively means nothing in the claim, e.g., a mode for the vehicle might possibly be where electrical power is being supplied to the vehicle electronics, as opposed to a mode where the vehicle is not powered/off, and so everything electronic in the vehicle occurs based on that mode. See MPEP 2173.05(b), IV.
In claim 25, line 1, and in claim 30, line 1, “wherein the collected data is analyzed” is indefinite (e.g., analyzed in what way particularly?), with it also being unclear how the passive voice limitation (“is analyzed”) might fit into any step of the method, or limit the method, if it does.
In claim 27, lines 1ff, “perform facial recognition of the data” is unclear (e.g., how can a face “of” the collected data be recognized?).
In claim 32, line 1, in claim 33, line 1, and in claim 44, line 1, “the action” apparently has insufficient antecedent basis and is unclear.
In claim 33, lines 1ff, “wherein the action is performed based on analyzing the collected sensor data” is vague and indefinite (e.g., analyzing the data in what particular way, and particularly how is the performance of the action based on the analyzing?).
In claim 34, line 13, “the trained computer model” is indefinite and unclear, apparently having insufficient antecedent basis.
In claim 35, lines 1ff, “the at least one sensor comprises a wearable computing device” is confusing and unclear (e.g., how can the part “comprise[]” the whole?). See, e.g., published paragraph [0046] of the specification for more correct language, e.g., “the at least one sensor comprises a sensor of a wearable computing device worn by a passenger of the autonomous vehicle”.
In claim 40, lines 1ff, “a memory device configured to analyze the sensor data” is unclear from the teachings of the specification (e.g., analyze in what way particularly, so as not to be facially subjective or to have indeterminate metes and bounds; it is also unclear how the memory device might perform the action of analyzing, if it does).
Claim(s) depending from claims expressly noted above are also rejected under 35 U.S.C. 112 by reason of their dependency from a noted claim that is rejected under 35 U.S.C. 112, for the reasons given.
Claim limitation “a controller” as introduced in the independent claims invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. It is apparently undescribed by what algorithm(s) the controller performs (or is configured to perform) i) the full conducting of pattern recognition step/function, ii) the full generating of the output step/function, and iii) the full controlling of presentation/display function based on both the output and the determined mode, so that equivalents may be determined in order to ascertain the metes and bounds of the claim limitation/term with reasonable certainty. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21 to 26, 30 to 47 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Szczerba et al. (2010/0253595) in view of Kim et al. (2019/0318492) and Aoki et al. (2002/0089756).
Szczerba et al. (‘595) reveals:
per claim 21, a method comprising:
collecting, by a controller [e.g., the processor, etc. of the enhanced vision system (EVS) system manager 110], data from at least one sensor [e.g., from the camera system 120, the radar system 125, etc.; see e.g., paragraph [0107], etc.] in an autonomous vehicle [e.g., the vehicle having semi-autonomous control (paragraph [0166]) with automatic steering, and adaptive cruise control (ACC) for automatically controlling vehicle speed];
conducting, by the controller, pattern recognition of the data based on inputting the data into a computer model [e.g., as shown in FIGS. 17, 18, 38, etc. of Szczerba et al. (‘595), to (track and) identify/classify target objects on the HUD, such as a white-tail deer 274 bounding toward the road in FIG. 37 (e.g., paragraphs [0123], [0187], etc.), and based on the “feature extraction” of FIG. 17 which is used to extract and classify “features or patterns in the data indicative of an object in the patch” (e.g., paragraph [0123]), etc.; see also paragraphs [0154], [0155], [0160], [0175], etc.] configured to identify at least one feature associated with the autonomous vehicle;
generating [e.g., the object classification, track, update, etc. in FIGS. 17, 18, 38, etc.] an output based on the at least one feature [e.g., based on the “feature extraction” in FIG. 17];
transforming image data to correct for distortion associated with a shape of a windshield of the autonomous vehicle, wherein the transformed image data corresponds to at least one first image [e.g., paragraphs [0180], [0181], “Projecting an image upon a curved and slanted windscreen creates a potential for irregularities in the resulting graphic images. . . . Another potential irregularity includes distortion in the graphical images created by geometric distortions due to non-flat display surfaces, perspective, and optical aberrations in large projection wide viewing angle system configurations. A two pass distortion correction scheme is disclosed to correct geometric distortion of laser vector projection displays by modeling the scan curves and projection screens with non-uniform-rational b-spline (NURB) parametric curves/patches. In the first pass, the desired NURBs in object space will be transformed to the viewing space defined by a viewpoint. The perspective is then mapped to a virtual display plane due to their affine and perspective invariants. Finally it is mapped to the non-flat display surface with parametric space mapping, if necessary. In the second pass, the path is transformed to a projection space that is defined by the position of the projector, and then the path is mapped to the projector plane. The non-linear distortions are corrected by calibration methods.”]; and
controlling, based on the output [e.g., the HUD image (e.g., in FIGS. 33, 34, 36, 37, etc.) obviously corrected for the geometric distortions due to the non-flat display surfaces, as taught in paragraph [0181]] and based on a mode determined for the autonomous vehicle [e.g., FIGS. 33, 34, 36, 37, paragraphs [0166], [0167], [0186], etc., “In a similar application, in vehicles utilizing semi-autonomous driving, wherein automatic vehicle lateral control is utilized through a lane keeping system coupled with an automatic steering mechanism, graphics upon the HUD can be utilized to inform the operator in advance that a lane change or other action is imminent, such that the operator is not surprised by the action subsequently taken by the semi-autonomous controls”], presentation of the at least one first image on the windshield [e.g., as shown in FIGS. 33, 34, 36, 37, etc., and as described at paragraphs [0166], [0167], [0186], etc.];
It may be alleged that Szczerba et al. (‘595) does not reveal the trained computer model or the determined mode, although the examiner understands that the computerized elements for feature extraction, classification, prediction, etc. from sensor data (e.g., in FIG. 17), object tracking from sensor data, collision threat assessment, the model selector in FIG. 18, etc. would have all obviously been understood (by those skilled in the art) to be “computer model[s]”, as they output information that would reflect/simulate what features were present in/for the detected objects, how the features were classified, how they were predicted to move, etc. (e.g., paragraphs [0123], [0124], etc.), and he also teaches at paragraphs [0130], [0132], etc. that the camera data from the vision subsystem is monitored in order to generate estimates of the geometry of the lane of travel (e.g., 202, 204) or the vehicle, and that visually detected features are used to depict lane markers (775A, 775B in FIG. 20 and 202, 204 in FIG. 33) to be projected on the HUD (paragraphs [0164], [0185], etc.) as shown in FIG. 33 and to describe the lane geometry relative to the vehicle 760, e.g., to reduce driving complexity to the operator, e.g., even in conditions of heavy fog, and so that the driver will not be surprised e.g., by automatic steering (paragraph [0166]).
It may be alleged that Szczerba et al. (‘595) also does not reveal that the transformed image corresponds to at least one (first) image and that the transformed image is for display e.g., on the windshield, although he clearly suggests that the HUD projected images should be corrected for geometric distortions due to non-flat display surfaces, perspective, and optical aberrations in large projection wide viewing angle system configurations (paragraph [0181]).
However, in the context/field of improved estimating of lane information for a head-up display used with an autonomous driving system (ADS; paragraph [0029]) and an in-vehicle electronic device 100 including a camera, such as a wearable device (paragraph [0035]), Kim et al. (‘492) teaches that a (computer) learning model (e.g., trained at a server in FIG. 12 using images from the camera, and trained using supervised or unsupervised learning at paragraph [0151]) may be used to convert, through object recognition, an image 101 acquired by a camera (e.g., 1610) associated with/in the vehicle into a converted image satisfying the predetermined visibility reference value, as at paragraphs [0166], [0167], FIGS. 1, 3, 4, 6, 7, etc., such that the converted image is displayed on the head-up display of FIGS. 1, 3, 4, 6, 7, etc., for example, on the display 106 of the electronic device 100 (e.g., paragraphs [0033], etc., in FIG. 1), the display 601 in FIG. 6 (paragraphs [0088], etc.), or the display 1210 in FIG. 7 (paragraphs [0166], [0167], etc.).
Moreover, in the context/field of an improved head up display for use in a vehicle, Aoki et al. (‘756) teaches that an image to be seen/displayed/projected (FIG. 5), as input at the signal input 9, may be converted/distorted in an image distortion generating means having a replaceable ROM (e.g., according to an inverse image shown in FIG. 7 for the windshield of a particular type of vehicle, driver physique, etc.), prior to being output at 21 for display, so that distortion (as shown in FIG. 6) otherwise arising in a virtual image as seen from a driver eye point due to non-flatness of the windshield will be canceled out (claim 1), so that the image to be seen/displayed/projected (FIG. 5) is seen without the distortion due to the non-flatness of the windshield.
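For illustration only, a minimal sketch (all names hypothetical) of the inverse-warp principle taught by Aoki et al. (‘756), in which a pre-stored inverse mapping (cf. the replaceable ROM and the inverse image of FIG. 7) pre-distorts the source image so that the windshield's non-flatness cancels the distortion as seen from the driver eye point; this sketch does not purport to implement the NURB-based two-pass scheme of Szczerba et al. (‘595):

    import numpy as np

    def predistort(image: np.ndarray, inverse_map: np.ndarray) -> np.ndarray:
        # inverse_map[y, x] = (src_y, src_x): the source pixel to draw at
        # output location (y, x), chosen so that the windshield's curvature,
        # applied afterwards, cancels out for the driver's viewpoint.
        h, w = inverse_map.shape[:2]
        out = np.zeros((h, w) + image.shape[2:], dtype=image.dtype)
        for y in range(h):
            for x in range(w):
                src_y, src_x = inverse_map[y, x]
                out[y, x] = image[int(src_y), int(src_x)]
        return out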
It would have been obvious before the effective filing date of the claimed invention to implement or modify the Szczerba et al. (‘595) system for virtual control and displays by laser projection so that, for estimating lane (marker position) information as desired by Szczerba et al. (‘595), the vehicle would have been provided with a (trained computer) learning model (e.g., 1310, 1320, etc.; being trained e.g., at a server in FIG. 12 of Kim et al. (‘492), and being trained using supervised or unsupervised learning at paragraph [0151]) in order to convert, through object recognition, image(s) acquired by camera(s) in the vehicle, such as included in a wearable device in the vehicle, into converted image(s) satisfying the predetermined visibility reference value, as taught by Kim et al. (‘492), and so that the vehicle would have displayed, while the vehicle was driving (as a mode) with an autonomous driving system (ADS) as taught by Kim et al. (‘492) and in fog as taught by Szczerba et al. (‘595), the model converted images as taught by Kim et al. (‘492), for example as the projected lane indicators 222, 224 in FIG. 33 of Szczerba et al. (‘595) that were to be based on sensor information, in order that the operator would have been assisted in noticing the general position of the roadway, with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
It would have been obvious before the effective filing date of the claimed invention to implement or further modify the Szczerba et al. (‘595) system for virtual control and displays by laser projection so that the converted image satisfying the predetermined visibility reference value would have been displayed on the HUD, after (also) converting/distorting/transforming the image in the manner taught by Aoki et al. (‘756) so that distortion otherwise arising in a virtual image as seen from a driver eye point due to non-flatness of the windshield would have been canceled out, as taught by Aoki et al. (‘756), in order that the image to be seen/displayed/projected for the driver would have been seen without the distortion due to the non-flatness of the windshield, with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
As such, the implemented or further modified Szczerba et al. (‘595) system for virtual control and displays by laser projection would have rendered obvious:
per claim 21, a method comprising:
collecting, by a controller [e.g., in Szczerba et al. (‘595), the processor, etc. of the enhanced vision system (EVS) system manager 110], data from at least one sensor [e.g., in Szczerba et al. (‘595), from the camera system 120, the radar system 125, etc.; and in Kim et al. (‘492) data from a camera (1610) of an electronic device 100 installed in the vehicle] in an autonomous vehicle [e.g., in Szczerba et al. (‘595), the vehicle having semi-autonomous control (paragraph [0166]) with automatic steering, and adaptive cruise control (ACC) for automatically controlling vehicle speed; and the vehicle with the autonomous driving system (ADS) in Kim et al. (‘492)];
conducting, by the controller, pattern recognition [e.g., object recognition of the data in paragraphs [0030], [0041], etc. in Kim et al. (‘492); and/or the extraction and classification of “features or patterns in the data indicative of an object in the patch” (e.g., paragraph [0123]), etc. in Szczerba et al. (‘595); see also paragraph [0160] in Szczerba et al. (‘595), “In the above exemplary determinations, contextual information to the target tracking data can be achieved by a number of methods, including but not limited to correlating relative motion of the target to the host vehicle's speed, GPS data including map data describing lane geometry for the vehicle's present location, lane geometry described by visual or camera data, and/or pattern recognition programming analyzing images of the tracked object sufficient to discern between an on-coming vehicle and a signpost”; see also paragraph [0154] in Szczerba et al. (‘595), “The aforementioned methods describe the use of vision or camera systems. Analysis of such information can be performed by methods known in the art. Image recognition frequently includes programming to look for changes in contrast or color in an image indicating vertical lines, edges, corners or other patterns indicative of an object”; see also paragraphs [0155], [0175], etc.] based on inputting the data into a trained computer model [e.g., in Kim et al. (‘492), the learning model (321) trained e.g., as in FIG. 10 and at the server in FIG. 12 using images from the camera and used e.g., by the data recognizer 1320 and/or the data learner 1310] configured to identify at least one feature [e.g., lane/object information in Kim et al. (‘492); and/or the white-tail deer 274 in Szczerba et al. (‘595)] associated with the autonomous vehicle [e.g., by using the (computer) learning model for outputting (e.g., as shown in FIG. 3) the conv