Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 7, 13, 19 and 20 are amended.
Claims 1-20 are pending.
Priority
This application is a continuation of application 16963375, which claimed priority to JP2018010903, with a filing date of 1/17/2018. This application is being examined using the filing date indicated on the JP priority document.
Response to Arguments/Remarks
Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection necessitated by Applicant's amendments.
For Clarity:
Applicant argues:
“To establish a prima facie case of obviousness under § 103, the Examiner must show that the prior art references, when combined, teach or suggest all of the claim limitations (MPEP §2143). Applicant respectfully submits independent claim 1 is patentable over the cited references because the cited references, whether considered alone or in combination, do not teach or suggest "in a case that the identification information of the driver is not automatically determined, transmit the captured image of the driver to a terminal device to enable the terminal device to display the captured image, acquire user input related to the driver, the user input being provided in response to the terminal device displaying the captured image of the driver, and determine the identification information of the driver from among identification information of drivers registered in advance based on the user input."”
Examiner respectfully disagrees. Applicant is arguing the newly added amendments; a new ground of rejection addressing those amendments is set forth below.
Applicant refers to the amendments in this argument; further clarity is provided in the revised 103 rejection below. Examiner would like to point out that the factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Examiner maintains that a prima facie case of obviousness is established in the 103 rejection below.
Also note that under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term is the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification; the greatest clarity is obtained when the specification serves as a glossary for the claim terms. See MPEP § 2111.01(I). See also In re Marosi, 710 F.2d 799, 802, 218 USPQ 289, 292 (Fed. Cir. 1983) ("'[C]laims are not to be read in a vacuum, and limitations therein are to be interpreted in light of the specification in giving them their broadest reasonable interpretation.'"); MPEP § 2111.01(II).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 7-9, 13-15, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Toda et al. [US 20060072792, now Toda], in view of Choi [US 20120203599, now Choi], in view of Nemat-Nasser et al. [US 20130073114, now Nemat], further in view of Lacey et al. [US 20170140174, now Lacey].
Claim 1
Toda discloses an apparatus comprising: a memory configured to store instructions; and at least one processor configured to execute the instructions [see at least Toda, 0011 (“according to another aspect of the present invention, a driver monitoring system for a vehicle includes an image capturing means for capturing an image… controlling means for performing a face identification determination, and an inattentive and drowsy driving determination of a driver based on an image captured by the image capturing means.”); 0020 (“The face identification determining portion 7 stores in a memory … determining whether or not the driver seated on the driver's seat has been registered… A registration of a driver is conducted by capturing an image of the driver's face by the image capturing device.”)] to:
acquire a captured image of a driver of the vehicle [see at least Toda, abstract, Fig. 1, ¶ 0010];
acquire driving-state data including a captured image of a driver [see at least Toda, abstract, Fig. 1, ¶ 0010 (“image capturing means for capturing an image… performing a face identification determination”)];
automatically determine identification information of the driver from among identification information of drivers registered in advance based on information of the driver included in the captured image; [see at least Toda, abstract, Fig. 1, ¶ 0010 (“The controlling means determines whether or not a driver on the driver's seat is a registered driver on the basis of an image captured by the image capturing means when a shifting means is in a parking position”)];
Toda does not specifically disclose but Choi does teach some aspects of the limitation transmit the captured image of the driver to a terminal device to enable the terminal device to display the captured image, acquire user input related to the driver, the user input being provided in response to the terminal device displaying the captured image of the driver [see at least Choi, abstract (“A management server authenticates taxi drivers”); Fig. 5; ¶ 0016 (“authenticating a driver of the taxi based on the driver mobile terminal”); 0082 (“then the authentication results included in the received authentication result message indicate a failed authentication, the driver terminal 100 displays a re-authentication screen for the driver in step 1319.”)]; and
Choi further teaches record the driving-state data with the identification information of the driver [see at least Choi, ¶ 0048 (“The memory 140 stores processing and control programs for the first controller 110, reference data, various updatable archival data, phone numbers, etc., and serves as a working memory of the controller 110. Additionally, the memory 140 may store program data used to provide various functions to the mobile terminal. Further, the memory 140 may store various information received through the wireless communication unit 120 or the first short-range communication module 130.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Nemat more specifically teaches automatically determine identification information of the driver from among identification information of drivers registered in advance based on information of the driver included in the captured image [see at least Nemat, abstract ("detect a face of a driver in the image, determine a set of face data from the image, and identify the driver based at least in part on the set of face data"), ¶ 0017 (“these sensor functions are performed automatically by the sensor or are carried out in response to commands (e.g., issued by the onboard computer 104)”); 0020 ("…The captured images may be used to identify the driver, to record driver behavior regarding circumstances leading up to, during, and immediately after a driving event")];
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, further with the more specific identification and authentication of the driver using facial features of Nemat. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Toda, Choi and Nemat disclose/teach using an image to identify a driver but Lacey more specifically teaches transmit the captured image of the driver to a terminal device to enable the terminal device to display the captured image, acquire user input related to the driver, the user input being provided in response to the terminal device displaying the captured image of the driver [see at least Lacey, Abstract (“The method further includes generating, in a system agnostic widget, a consent request for requesting authorization to release the personal information associated with the user to the third party and transmitting the consent request to a client device of the user via the widget.”); ¶ 0030 (“the client application 112 facilitates transmission of PII to other devices, such as the hub server 104 and/or requesting devices 108-n (e.g., a third party)”); 0031 (“displaying on its display”); 0053 (“displayed on the client device”); 0072 (“includes a display and input devices”); 0077 - 0095 (“… a user interface module 224 that receives commands and/or inputs from a user via the user interface 206 (e.g., from the input device(s) 210, which may include keyboard(s), touch screen(s), microphone(s), pointing device(s), and the like), and provides user interface objects on a display (e.g., the display 208); [0081] an image capture device module 226 (including, for example, applications, drivers, etc.) that works in conjunction with the image capture device 214 to capture images…”) thus containing all the units and ability to fulfill the actions of this limitation.].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, with the more specific identification and authentication of the driver using facial features of Nemat, further with the ability to transmit and display images for identification/authorization of Lacey. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Claim 2
Toda, Choi, Nemat, and Lacey disclose/teach the apparatus of Claim 1.
Toda does not specifically teach but Choi teaches wherein the at least one processor is configured to execute the instructions to acquire driving-state data including acceleration information of the vehicle, and record the driving-state data including acceleration information of the vehicle in association with the identification information of the driver [see at least Choi, Fig. 5; ¶ 0082 (“then the authentication results included in the received authentication result message indicate a failed authentication, the driver terminal 100 displays a re-authentication screen for the driver in step 1319.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Neither Toda nor Choi specifically discloses/teaches but Nemat does teach including acceleration information of the vehicle, and record the driving-state data including acceleration information of the vehicle in association with the identification information of the driver [see at least Nemat, ¶ 0020 (“The captured video images may be processed using one or more processors to determine whether the vehicle has departed from its proper lane and by how much. One or more accelerometers 310 may be placed onboard the vehicle to monitor acceleration along one or more vehicle axis… In various embodiments, the face data are associated (e.g., by attaching the same header) to driving data captured around or at the time the image from which the face data are derived is captured.”); 0035].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, with the more specific identification and authentication of the driver using facial features and acceleration findings of Nemat. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Claim 3
Toda, Choi, Nemat, and Lacey disclose/teach the apparatus of Claim 1.
Neither Toda nor Choi discloses/teaches but Nemat teaches wherein the at least one processor is configured to execute the instructions to calculate a degree of coincidence between facial feature information included in the captured image of the driver and facial feature information of drivers registered in advance, and determine the identification information of the driver as identification information of a driver registered in advance associated with facial feature information having the degree of coincidence equal to or above a predetermined threshold [see at least Nemat, abstract ("detect a face of a driver in the image, determine a set of face data from the image, and identify the driver based at least in part on the set of face data"); ¶ 0021 ("The video cameras and/or the still cameras… are capable of capturing 3-D images. The captured images may be used to identify the driver, to record driver behavior regarding circumstances leading up to, during, and immediately after a driving event"); 0026 (“The quality of the face score is deemed acceptable if it is above a predefined threshold value or unacceptable…”); 0030 (“In various embodiments, using the face data to identify the driver includes transmitting the face data to a remote server via for example a wireless communications link for driver identification, authentication, and/or registration...”); thus, if the face data can be transmitted to a main server, it can likewise be transmitted to the user terminal].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, with the more specific identification and authentication of the driver using facial features of Nemat. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
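For clarity, the matching logic recited in claim 3 — computing a degree of coincidence between facial feature information from the captured image and facial feature information of drivers registered in advance, and identifying the driver only when the coincidence meets a predetermined threshold — follows a conventional pattern. The sketch below is purely illustrative and forms no part of the record: the cosine-similarity metric, the threshold value, and all function names are assumptions, not drawn from the claims or the cited references.

```python
# Illustrative sketch of threshold-based driver identification as recited in
# claim 3: compare facial features from a captured image against features of
# drivers registered in advance; return the registered driver whose degree of
# coincidence is equal to or above a predetermined threshold, else None.
import math

def degree_of_coincidence(features_a, features_b):
    """Cosine similarity between two facial feature vectors (one possible metric)."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm = (math.sqrt(sum(a * a for a in features_a))
            * math.sqrt(sum(b * b for b in features_b)))
    return dot / norm if norm else 0.0

def identify_driver(captured_features, registered, threshold=0.9):
    """Return the ID of the best-matching registered driver at or above the
    threshold, or None — the case in which claim 1 falls back to transmitting
    the captured image to a terminal device for user input."""
    best_id, best_score = None, threshold
    for driver_id, features in registered.items():
        score = degree_of_coincidence(captured_features, features)
        if score >= best_score:
            best_id, best_score = driver_id, score
    return best_id
```

When no registered feature vector reaches the threshold, the function returns None, which corresponds to the claimed fallback of displaying the captured image on a terminal device and determining the identification information from user input.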
Claim 7
Claim 7 has similar limitations to claim 1, therefore claim 7 is rejected with the same rationale as Claim 1.
Note Claim 7 has all the same limitations as Claim 1 after the preamble.
Claim 8
Claim 8 has similar limitations to claim 2, therefore claim 8 is rejected with the same rationale as Claim 2.
Claim 9
Claim 9 has similar limitations to claim 3, therefore claim 9 is rejected with the same rationale as Claim 3.
Claim 13
Claim 13 has similar limitations to claim 1, therefore claim 13 is rejected with the same rationale as Claim 1.
Note Claim 13 has the same limitations as Claim 1; only the preamble, which is not given patentable weight, has different language.
Claim 14
Claim 14 has similar limitations to claim 2, therefore claim 14 is rejected with the same rationale as Claim 2.
Claim 15
Claim 15 has similar limitations to claim 3, therefore claim 15 is rejected with the same rationale as Claim 3.
Claim 19
Claim 19 has similar limitations to claim 1, therefore claim 19 is rejected with the same rationale as Claim 1.
Note Claim 19 has all the same limitations as Claim 1; only the preamble, which is not given patentable weight, has different language.
Claim 20
Claim 20 has similar limitations to claim 1, therefore claim 20 is rejected with the same rationale as Claim 1.
Note Claim 20 has all the same limitations as Claim 1; only the preamble, which is not given patentable weight, has different language.
Claims 4-6, 10-12 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Toda et al. [US 20060072792 A1, now Toda], in view of Choi [US 20120203599, now Choi], further in view of Nemat-Nasser et al. [US 20130073114, now Nemat], in view of Lacey et al. [US 20170140174, now Lacey], further in view of Onishi [JP2016066241, now Onishi].
Claim 4
Toda, Choi, Nemat and Lacey disclose/teach the apparatus of Claim 1.
Toda does not disclose but Choi teaches identification information of the driver is not registered in advance [see at least Choi, Fig. 5; ¶ 0042; 0082 (“then the authentication results included in the received authentication result message…the driver terminal 100 displays a re-authentication screen for the driver in step 1319.”)];
Neither Toda, Choi, nor Nemat specifically discloses/teaches but Onishi teaches wherein the at least one processor is configured to execute the instructions to, in a case that the user input related to the driver to the terminal device indicates that identification information of the driver is not registered in advance, record the driving-state data in association with new identification information [see at least Onishi, ¶ 0002 (“processing apparatus”); 0040 (“it is determined that new authentication is necessary. The face data of the user C is temporarily stored in the collation face data storage unit 125,”); 0067 ("the camera control unit 122 assigns temporary IDs to face images that need to be newly authenticated, temporarily stores the face images in the collation face image storage unit 125, and advances the process to S912."); 0068 (“the camera control unit 122 generates an interrupt via the bus 112 in order to notify the control unit 107 of update information about the contents of the face stored in the collation face storage unit 125. Specifically, the update information includes ID information of face data deleted from the collation face data storage unit 125, face data requiring new authentication, and a temporary ID.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, further with the temporal identification and overall use of stored identification to authenticate a user taught in Onishi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Claim 5
Toda, Choi, Nemat and Lacey disclose/teach the apparatus of Claim 1.
Toda does not disclose but Choi teaches wherein the at least one processor is configured to execute the instructions to, in a case that the identification information of the driver is not automatically determined, generate temporal identification information for the driver and record the driving-state data in association with the temporal identification information [see at least Choi, Fig. 5; ¶ 0048 (“The memory 140 stores processing and control programs… serves as a working memory of the controller 110…”); 0082 (“then the authentication results included in the received authentication result message…”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Claim 6
Toda, Choi, Nemat and Lacey disclose/teach the apparatus of Claim 4.
Toda does not disclose but Choi teaches the at least one processor is configured to execute the instructions to, in a case that the identification information of the driver is determined based on the user input, record the driving-state data in association with the determined identification information in place of the temporal identification information [see at least Choi, ¶ 0048].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Toda, Choi, Nemat and Lacey teach the concepts in the claims and all the limitations of Claim 1 and Claim 4, but Onishi more specifically teaches the at least one processor is configured to execute the instructions to, in a case that the identification information of the driver is determined based on the user input, record the driving-state data in association with the determined identification information in place of the temporal identification information [see at least Onishi, ¶ 0038 (“is a diagram for explaining an outline of a camera image obtained by viewing, through the camera unit 127, a state in which the user C further approaches the MFP201 and enters the camera recognition area 206 from the state shown in FIG. 4.”); 0040 (“it is determined that new authentication is necessary. The face data of the user C is temporarily stored in the collation face data storage unit 125,”); 0067 ("the camera control unit 122 assigns temporary IDs to face images…"); 0068 (“…notify the control unit 107 of update information … the update information includes ID information of face data deleted from the collation face data storage unit 125, face data requiring new authentication, and a temporary ID.”)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine, with a reasonable expectation of success, the driver monitoring system of Toda [Toda, abstract] with the more specific identification and storage of information of Choi, further with the temporal identification and overall use of stored identification to authenticate a user taught in Onishi. This allows for a more efficient and effective technique to identify a driver and provide more specific information, including temporary verification of the driver, which allows for a more robust process to identify and use vehicle information.
Claims 10 and 16
Claims 10 and 16 have similar limitations to claim 4; therefore claims 10 and 16 are rejected with the same rationale as Claim 4.
Claims 11 and 17
Claims 11 and 17 have similar limitations to claim 5; therefore claims 11 and 17 are rejected with the same rationale as Claim 5.
Claims 12 and 18
Claims 12 and 18 have similar limitations to claim 6; therefore claims 12 and 18 are rejected with the same rationale as Claim 6.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOAN T GOODBODY whose telephone number is (571) 270-7952. The examiner can normally be reached on M-TH 7-3 (US Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.html.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RACHID BENDIDI can be reached at (571) 272-4896. The Fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from the USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/JOAN T GOODBODY/
Primary Examiner, Art Unit 3664
(571) 270-7952