Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/05/2024 is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a captured data acquisition unit configured to acquire video data,” “an operation controller configured to receive voice operation,” “a detection unit configured to detect a speech state,” and “a recording controller configured to: record the video data … generate event data … and store the generated event data” in claim 1.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, and 5-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 7
Step 1 – YES
Claim 7 recites a method, and thus falls within one of the statutory categories.
Step 2A, Prong One – YES
Claim 7 recites an abstract idea. Claim 7 recites “a step of detecting a speech state of the voice command,” “generating event data while changing, based on the speech state of the voice command when the voice operation for event recording is received, a period of the event data that is extracted from the video data.” Detecting a speech state of a voice command may be performed in the mind using observation and judgment. That is, one of ordinary skill in the art would be able to determine a volume, speed, or other “state” of speech using observation and judgment. Generating event data, under the broadest reasonable interpretation, is generating any data regarding an event, for example the occurrence, duration, or type of event. This may be performed in the mind using observation and judgment, with the assistance of a physical aid, e.g., notating “event occurred” with pen and paper. Changing a period of the event data that is extracted from the video data includes merely selecting a frame or frames of video data. The claim does not require the changing to be automatic or otherwise computer-implemented, other than generically linking the invention to the field of vehicle video data recording, and thus one of ordinary skill in the art would be able to change a period of event data.
Step 2A, Prong Two – NO
Claim 7 does not recite additional elements that integrate the judicial exception into a practical application. Claim 7 recites “a step of acquiring video data captured by a camera that captures an image of surroundings of a vehicle; a step of receiving voice operation based on a voice command instructing event recording” and “a step of recording the video data.” Acquiring video data and receiving voice operation based on a voice command are mere data gathering and thus do not integrate the judicial exception into a practical application. Recording video data is mere data storage and thus also does not integrate the judicial exception into a practical application. As stated, the additional elements only generally link the invention to the field of on-vehicle video data recording and do not meaningfully integrate the judicial exception into a practical application by any step of automation or outputting.
Step 2B – NO
Claim 7 does not recite additional elements that amount to significantly more than the judicial exception. Claim 7 recites “a step of acquiring video data captured by a camera that captures an image of surroundings of a vehicle; a step of receiving voice operation based on a voice command instructing event recording” and “a step of recording the video data.” Acquiring, receiving, and recording steps are mere extra-solution activity as described above and thus do not amount to significantly more than the abstract idea.
Thus, Claim 7 is not eligible subject matter.
Claim 1 is an apparatus claim with elements corresponding to the steps of Claim 7. Claim 1 contains the additional elements of “An on-vehicle recording control apparatus,” “a captured data acquisition unit configured to acquire video data,” “an operation controller configured to receive voice operation,” “a detection unit configured to detect a speech state,” and “a recording controller configured to: record the video data … generate event data … and store the generated event data.” The units, as best interpreted in light of the interpretation under 112(f) above, are generic computer components and do not amount to significantly more than the judicial exception. An “on-vehicle recording control apparatus” merely connects the abstract idea to a field of endeavor and does not integrate the abstract ideas of detecting speech, generating event data, and changing a period of event data.
Claim 2 recites “wherein the detection unit is further configured to detect that, as the speech state, a duration of the voice command of the voice operation received by the operation controller is shorter than a voice command standard duration that is set in advance, and the recording controller is further configured to increase, when it is detected that the speech state of the voice command is shorter than the voice command standard duration, the period of the event data extracted from the video data.” Detecting a duration of the voice command being shorter than a “standard duration” may be performed in the mind by one of ordinary skill in the art by judgment and observation. Increasing a period of event data extracted may also be performed in the mind, as selecting a frame or frames does not require the selecting be automatic or otherwise computer-implemented other than generically linking the invention to the field of video data recording, and thus one of ordinary skill in the art would be able to change a period of event data. Thus, Claim 2 contains an abstract idea and does not contain additional elements to integrate the abstract idea into a practical application.
Claim 5 recites “wherein the detection unit is further configured to detect, as the speech state, that the voice command is spoken by multiple speakers within a predetermined period, and the recording controller is further configured to increase, when it is detected, as the speech state of the voice command, that the voice command is spoken by multiple speakers within the predetermined period, the period of the event data extracted from the video data.” Detecting multiple speakers may be performed in the mind by one of ordinary skill in the art by judgment and observation. Increasing a period of event data extracted may also be performed in the mind, as selecting a frame or frames does not require the selecting be automatic or otherwise computer-implemented other than generically linking the invention to the field of video data recording, and thus one of ordinary skill in the art would be able to change a period of event data. Thus, Claim 5 contains an abstract idea and does not contain additional elements to integrate the abstract idea into a practical application.
Claim 6 recites the additional element “wherein the recording controller is further configured to increase, as the period of the event data extracted from the video data, a retroactive period from a time point at which the voice operation is received.” This recording controller is a generic computer component and thus does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea; recording video before a certain time period does not integrate the abstract idea into a practical application and is common in the field of video recording.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Pan et al. (CN 105551109 A) in view of Julian et al. (US 2021/0370879 A1).
Regarding Claim 1, Pan teaches “An on-vehicle recording control apparatus comprising:
a captured data acquisition unit configured to acquire video data (Pan, paragraph 21 discloses “Image acquisition module for image data collection and collection”; where an image acquisition module is a capture data acquisition unit. Pan, paragraph 12 discloses “An audio processing module, which is connected to the central processing group and is responsible for synchronous audio collection and codec processing of the recorded video”; where a recorded video is video data. Pan, paragraph 7, recites the invention is directed to a driving application: “The technical problem to be solved by the present invention is to provide a driving recorder based on voice recognition, which can control the image required for real-time recording without touching the buttons on the driving recorder to realize wireless control”; thus, Pan teaches recording an image of surroundings of a vehicle);
“an operation controller configured to receive voice operation based on a voice command instructing event recording” (Pan, paragraph 12 recites “An audio processing module, which is connected to the central processing group and is responsible for synchronous audio collection and codec processing of the recorded video”; where a central processing group is an operation controller);
“a detection unit configured to detect a speech state of the voice command of the voice operation received by the operation controller” (Pan, paragraph 15 recites “a voice recognition module coupled to the central processing group for collecting and identifying voice data”; where a voice recognition module is a detection unit; where voice data is a speech state of a voice command; under the broadest reasonable interpretation, a “speech state of the voice command” may be any attribute of the voice, including voice recognition and identification; thus, Pan teaches detecting a speech state, where a speech state is identified voice data); “and
“a recording controller” (Pan, paragraph 13 discloses “a data storage module, which is connected to the central processing group for real-time recording of data storage”; where a data storage module is a recording controller) configured to:
record the video data acquired by the captured data acquisition unit” (Pan, paragraph 14 discloses “a data storage module, which is connected to the central processing group for real-time recording of data storage”; where storing data including voice and image data is recording video data);
“generate event data while changing, based on the speech state of the voice command detected by the detection unit when the operation controller receives the voice operation for event recording, a period of the event data that is extracted from the video data” (Pan, paragraph 16-17 discloses “Preferably, the data storage module comprises: The LCD image display playback module is used for displaying the playback module to be responsible for the real-time display of the image and the display function of the playback image”; where displaying playback image is generating event data. Pan, paragraph 27 recites “The invention has the beneficial effects that the driving recorder of the invention realizes the image required for real-time recording through voice control”; where performing real-time recording is changing a period of event data extracted; that is, under the broadest reasonable interpretation, changing a period of event data may be initiating a period of event data recording, as the period of event data extracted changes from no data recorded to current or subsequent data being recorded; Pan teaches initiating “real-time recording through voice control” and thus teaches changing a period of event data that is extracted); “and
store the generated event data” (Pan, paragraph 18 discloses “a data storage management module for providing data storage management and performing management of data local storage and expansion card storage”).
Although Pan recites “processing of the recorded video”, Pan does not explicitly teach “video data captured by a camera” (emphasis added).
However, in an analogous field of endeavor, Julian teaches “video data captured by a camera” (Julian, [0107] discloses “For example, FIG. 3B illustrates an example in which a driver's vehicle 310 is equipped with three outward facing cameras: a left camera with a left FOV 364 extending to the left, a right camera with a right FOV 366 extending to the right, and a front camera with a front FOV 362 extending forward.” Julian, [0108] also discloses “In some cases, an IDMS may be configured to only collect video data from the outside of the vehicle.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan to incorporate the teachings of Julian by using a camera to collect video outside of a vehicle. One of ordinary skill in the art would be motivated to combine the Pan and Julian references based on a teaching in Pan of a “driving recorder” collecting “recorded video.” That is, Pan does not explicitly teach a means (a camera, sensor, or other collection device) to collect the required recorded video for the invention of Pan, but in light of the teachings of Julian, it would be obvious to one of ordinary skill in the art that external cameras may be implemented to obtain video data. Accordingly, the combination of Pan and Julian discloses the invention of Claim 1.
Regarding Claim 7, Claim 7 recites a method with steps corresponding to the elements of the system recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Pan and Julian references, presented in rejection of Claim 1, apply to this claim.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pan et al. (CN 105551109 A) in view of Julian et al. (US 2021/0370879 A1), further in view of Palmer et al. (US 2016/0292936 A1).
Regarding Claim 6, the combination of Pan and Julian does not explicitly teach “The on-vehicle recording control apparatus according to claim 1, wherein the recording controller is further configured to increase, as the period of the event data extracted from the video data, a retroactive period from a time point at which the voice operation is received.”
However, in an analogous field of endeavor, Palmer teaches “The on-vehicle recording control apparatus according to claim 1, wherein the recording controller is further configured to increase, as the period of the event data extracted from the video data, a retroactive period from a time point at which the voice operation is received” (Palmer, [0025] discloses “Video system 16 may be configured such that the video information includes video information for periods of time that last from before and/or about the individual start times of the detected vehicle events until about and/or after the individual end times of the detected vehicle events”; where periods of time from before the start time of a vehicle event is a retroactive period; where a time point at which a voice operation is received is a start time of a detected vehicle event).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pan and Julian to incorporate the teachings of Palmer by including video information for a period of time lasting from before and after a vehicle event. The detected vehicle event of Palmer may be simply substituted for the voice data of Pan. That is, Pan teaches recording video data outside a vehicle after a voice data input, and Palmer teaches detecting vehicle events and then obtaining video information both before and after the event; substituting the initiating event of Palmer for the voice input of Pan would be obvious to one of ordinary skill in the art and provides the benefit of initiating an event without touching, thus improving safety. Pan, paragraph 7 discloses “The technical problem to be solved by the present invention is to provide a driving recorder based on voice recognition, which can control the image required for real-time recording without touching the buttons on the driving recorder to realize wireless control. This operation mode is safe. Convenient.” Accordingly, the combination of Pan, Julian, and Palmer discloses the invention of Claim 6.
Allowable Subject Matter
Claims 2 and 5 have been rejected above under 35 U.S.C. 101 but are not rejected over prior art references. These claims are objected to as being dependent upon a rejected base claim, but would be allowable if: (a) rewritten in independent form including all of the limitations of the base claims and any intervening claims; and (b) the above-described rejection of these claims under 35 U.S.C. 101 is overcome.
Claims 3 and 4 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding Claim 2, although Pan teaches speech recognition, Pan and Julian do not explicitly teach determining a duration of a voice command and thus changing the period of event data. Yoda (US 2002/0035475 A1) discloses determining speech duration: Yoda, [0041] discloses “In accordance with the voice acquisition command signal, the voice acquisition circuit 321 starts acquiring the voice signal supplied from the microphone 31 and sends the voice signal to the utterance duration detector 323 and the voice recognition processor 324,” and “During this period, the utterance duration detector 344 detects the duration of utterance made by the user on the basis of the mouth image data MI supplied from the mouth-part extractor 343.” However, Yoda uses both microphone and image data to determine utterance duration and thus recognize a voice, and does not disclose altering or changing any extracted video data. There is no motivation in the prior art to combine Yoda with Pan and Julian to teach the invention of Claim 2.
Thus, none of the previously cited prior art provides a motivation to teach “The on-vehicle recording control apparatus according to claim 1, wherein the detection unit is further configured to detect that, as the speech state, a duration of the voice command of the voice operation received by the operation controller is shorter than a voice command standard duration that is set in advance, and the recording controller is further configured to increase, when it is detected that the speech state of the voice command is shorter than the voice command standard duration, the period of the event data extracted from the video data.”
Regarding Claim 3, although Pan teaches speech recognition, Pan and Julian do not explicitly teach determining a recognition rate and thus increasing the period of event data in response to a rate in a low range. Otani (JP 2008003371 A) discloses on-vehicle voice recognition and discloses a voice recognition rate: Otani, page 3, paragraph 3 discloses “The present invention was created in view of the problems in the prior art, and improves the recognition rate for a registered voice command when the voice command is registered in a recognition dictionary and used to control an in-vehicle device such as a navigation device.” However, Otani does not explicitly teach changing a period of extracted video data based on a recognition rate. There is no motivation in the prior art to combine Otani with Pan and Julian to teach the invention of Claim 3.
Thus, none of the previously cited references, alone or in combination, provide a motivation to teach the ordered combination of “The on-vehicle recording control apparatus according to claim 1, wherein the operation controller is further configured to receive the voice operation when a recognition rate of the voice command is equal to or larger than a predetermined threshold, the detection unit is further configured to detect, as the speech state, that the recognition rate of the voice command is low in a range of the recognition rate in which the voice operation is received, and the recording controller is further configured to increase, when it is detected that the recognition rate is low as the speech state of the voice command, the period of the event data extracted from the video data.”
Regarding Claim 4, Pan and Julian do not explicitly teach determining a speech volume. Bae et al. (KR 20150065643 A) discloses determining a voice volume and comparing the volume to a predetermined volume: Bae, page 11, paragraph 4 discloses “When the voice recognition start command is input, the video apparatus 1 determines whether a voice having a volume equal to or higher than a predetermined volume is inputted through the first audio input unit 112 or the second audio input unit 312.” However, Bae does not explicitly teach increasing a period of event data in response to a volume being higher than a threshold. There is no motivation in the prior art to combine Bae with Pan and Julian to teach the invention of Claim 4.
Thus, none of the previously cited references, alone or in combination, provide a motivation to teach the ordered combination of “The on-vehicle recording control apparatus according to claim 1, wherein the detection unit is further configured to detect, as the speech state, a speech volume of the voice command when the voice operation is received, and the recording controller is further configured to increase, when it is detected, as the speech state of the voice command, that the speech volume of the voice command is larger than a predetermined value, the period of event data extracted from the video data.”
Regarding Claim 5, Pan and Julian do not explicitly teach detecting multiple speakers. Enbom (US 2013/0201272 A1) discloses determining multiple speakers using video data: Enbom, [0060] discloses “If during the initialization period, it is determined that there are multiple faces detected in the video (or if it is determined that a single face is not detected), the gain controller sets the system to a multiple speaker mode (e.g., based on receiving the AGC-T value that corresponds to a multiple speaker mode value).” However, Enbom does not explicitly teach increasing a period of event data extracted in response to the multiple speaker determination. There is no motivation in the prior art to combine Enbom with Pan and Julian to teach the invention of Claim 5.
Thus, none of the previously cited references, alone or in combination, provide a motivation to teach the ordered combination of “The on-vehicle recording control apparatus according to claim 1, wherein the detection unit is further configured to detect, as the speech state, that the voice command is spoken by multiple speakers within a predetermined period, and the recording controller is further configured to increase, when it is detected, as the speech state of the voice command, that the voice command is spoken by multiple speakers within the predetermined period, the period of the event data extracted from the video data.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Penilla et al. (US 2017/0200449 A1) teaches using voice and image data to determine the mood of a driver of a vehicle and thus make an adjustment to a setting of the vehicle; see Fig. 27.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE TABANCAY DUFFY whose telephone number is (703)756-1859. The examiner can normally be reached Monday - Friday 8:00 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLINE TABANCAY DUFFY/Examiner, Art Unit 2662 /AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662