DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Group II (claims 9-14) in the reply filed on 10/29/2025 is acknowledged.
Applicant’s amendment to the claims canceling the non-elected claims of Groups I and III (claims 1-8 and 15-20) and adding new claims 21-34 is entered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 9-10, 13, 21-22, 27-30, and 32 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wu (US 20160055371).
Regarding claim 9:
Wu discloses: a mobile apparatus comprising: a processor and a non-transitory computer-readable medium (FIG. 1, ¶ [0017] “An embodiment of the invention provides smart glasses including an image capturing unit, a storage unit, a display unit, and a processing unit.” The smart glasses constitute a mobile apparatus); the non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause the mobile apparatus to:
provide a first faceprint from a library of faceprints to a smart glasses device (¶ [0017] “…The storage unit is configured to store a database, and the database records a plurality of profile information and business card information corresponding to each of the profile information.”; ¶ [0011] “…Then, facial features of each of the recognized faces are compared with those of the profile information in the database to find the profile information matching the facial features”; in order to perform a comparison between facial features and other facial features stored in the database, the facial features in the database must be retrieved, FIG. 2, step S206 and ¶ [0042]. Nothing in the claim precludes the smart glasses and the mobile apparatus from being the same device);
cause the smart glasses device to perform targeted facial recognition for the first faceprint (¶ [0017] “…The processing unit is coupled to the image capturing unit, the storage unit, and the display unit, and is configured to recognize at least one face appearing in the image captured by the image capturing unit, and compare the facial features of each of the recognized faces with those of the profile information in the database to find the profile information matching the facial features. In particular, if the profile information matching the facial features is found” in order to perform the comparison, facial recognition on the profile information must be performed; FIG. 2, step S204 and ¶ [0041]);
obtain a second faceprint from the smart glasses device (¶ [0017] “…The processing unit is coupled to the image capturing unit, the storage unit, and the display unit, and is configured to recognize at least one face appearing in the image captured by the image capturing unit, and compare the facial features of each of the recognized faces with those of the profile information in the database to find the profile information matching the facial features. In particular, if the profile information matching the facial features is found” in order to perform the comparison, facial recognition on the captured face must be performed to obtain its facial features (a second faceprint); FIG. 2, step S202 and ¶ [0040]);
and determine whether to add the second faceprint to the library of faceprints (¶ [0017] “…if profile information matching the facial features is not found, the processing unit recognizes the business card information of a business card appearing in the image captured by the image capturing unit, and associates the recognized business card information with the recognized face.” FIG. 2, step S212; ¶ [0044] “…if the profile information matching the facial features is not found, the processing unit 14 further recognizes the business card appearing in the image captured by the image capturing unit 11 to obtain the business card information, and then associates the recognized business card information with the recognized face and writes the association into the database (step S212).” When the newly captured image contains a face that is not in the database, the facial image and the business card are associated with each other and stored in the database).
Regarding claim 10:
Wu discloses the limitations of claim 9 as applied above.
Wu further discloses: where the library of faceprints is associated with a user (¶ [0036], ¶ [0042] – ¶ [0043], and ¶ [0023]. Disclose that the database is pre-established by a user and stores info about “people that the user met” (contacts); therefore associated with the user).
Regarding claim 13:
Wu discloses the limitations of claim 9 as applied above.
Wu further discloses: where the determination is based on a quality of the second faceprint (¶¶ [0013], [0019], and [0020] disclose that the enrollment decision is conditional on the quality of that face: face size within a preset range).
Regarding claim 21:
Wu discloses: a smart glasses device, comprising: a frame; a processor disposed about the frame; and a non-transitory computer-readable medium disposed about the frame (FIG. 1, ¶ [0017] “An embodiment of the invention provides smart glasses including an image capturing unit, a storage unit, a display unit, and a processing unit.” Smart glasses inherently include a frame); the non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause the smart glasses device to:
receive a first faceprint from a library of faceprints from another device (¶ [0017] “…The storage unit is configured to store a database, and the database records a plurality of profile information and business card information corresponding to each of the profile information.”; ¶ [0011] “…Then, facial features of each of the recognized faces are compared with those of the profile information in the database to find the profile information matching the facial features”; in order to perform a comparison between facial features and other facial features stored in the database, the facial features in the database must be retrieved, FIG. 2, step S206 and ¶ [0042]. Nothing in the claim precludes the smart glasses and the other device from being the same device);
perform targeted facial recognition for the first faceprint (¶ [0017] “…The processing unit is coupled to the image capturing unit, the storage unit, and the display unit, and is configured to recognize at least one face appearing in the image captured by the image capturing unit, and compare the facial features of each of the recognized faces with those of the profile information in the database to find the profile information matching the facial features. In particular, if the profile information matching the facial features is found” in order to perform the comparison, facial recognition on the profile information must be performed; FIG. 2, step S204 and ¶ [0041]);
and send a second faceprint to the other device to add to the library of faceprints (¶ [0017] “…if profile information matching the facial features is not found, the processing unit recognizes the business card information of a business card appearing in the image captured by the image capturing unit, and associates the recognized business card information with the recognized face.” FIG. 2, step S212; ¶ [0044] “…if the profile information matching the facial features is not found, the processing unit 14 further recognizes the business card appearing in the image captured by the image capturing unit 11 to obtain the business card information, and then associates the recognized business card information with the recognized face and writes the association into the database (step S212).”).
Regarding claim 22: the claim’s limitations are similar to those of claim 10; therefore, claim 22 is rejected in the same manner as applied above.
Regarding claim 27: the claim’s limitations are similar to those of claim 13; therefore, claim 27 is rejected in the same manner as applied above.
Regarding claim 28:
Wu discloses: a mobile apparatus, comprising: a housing; a processor disposed within the housing; and a non-transitory computer-readable medium disposed within the housing (FIG. 1, ¶ [0017] “An embodiment of the invention provides smart glasses including an image capturing unit, a storage unit, a display unit, and a processing unit.” The smart glasses inherently include a housing); the non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause the mobile apparatus to:
obtain a region-of-interest from a smart glasses device, the region-of-interest comprising facial data (¶ [0017] “…The processing unit is coupled to the image capturing unit, the storage unit, and the display unit, and is configured to recognize at least one face appearing in the image captured by the image capturing unit, and compare the facial features of each of the recognized faces with those of the profile information in the database to find the profile information matching the facial features. In particular, if the profile information matching the facial features is found” in order to perform the comparison, facial recognition on the profile information must be performed; FIG. 2, step S202 and ¶ [0040]; step S206 and ¶ [0042]. Nothing in the claim precludes the smart glasses and the mobile apparatus from being the same device);
determine whether to add a first faceprint to a library of faceprints based on the region-of-interest (¶ [0017] “…if profile information matching the facial features is not found, the processing unit recognizes the business card information of a business card appearing in the image captured by the image capturing unit, and associates the recognized business card information with the recognized face.” FIG. 2, step S212; ¶ [0044] “…if the profile information matching the facial features is not found, the processing unit 14 further recognizes the business card appearing in the image captured by the image capturing unit 11 to obtain the business card information, and then associates the recognized business card information with the recognized face and writes the association into the database (step S212).” When the newly captured image contains a face that is not in the database, the facial image and the business card are associated with each other and stored in the database);
and provide the first faceprint from the library of faceprints to the smart glasses device (¶ [0017] “…The storage unit is configured to store a database, and the database records a plurality of profile information and business card information corresponding to each of the profile information.”; ¶ [0011] “…Then, facial features of each of the recognized faces are compared with those of the profile information in the database to find the profile information matching the facial features”; in order to perform a comparison between facial features and other facial features stored in the database, the facial features in the database must be retrieved, FIG. 2, step S206 and ¶ [0042]).
Regarding claim 29:
Wu discloses the limitations of claim 28 as applied above.
Wu further discloses: where the region-of-interest comprises extracted facial features (¶ [0041] “…the processing unit 14 recognizes the face appearing in the image via, for instance, the outline of the face, the positions and shapes of facial features, hairstyle, or skin color, and obtains the facial features of each of the faces.”).
Regarding claim 30: the claim’s limitations are similar to those of claim 13; therefore, claim 30 is rejected in the same manner as applied above.
Regarding claim 32:
Wu discloses the limitations of claim 28 as applied above.
Wu further discloses: process the region-of-interest to obtain metadata (¶ [0044] “…the processing unit 14 further recognizes the business card appearing in the image captured by the image capturing unit 11 to obtain the business card information, and then associates the recognized business card information with the recognized face and writes the association into the database (step S212). The business card information is, for instance, the name of a person or a company, a phone, a fax, an address, a URL, a unified code, an email address, or other personal information obtained via, for instance, the optical character recognition (OCR) of an image captured by the image capturing unit 11, and the invention is not limited thereto.”);
and provide the metadata to the smart glasses device (¶ [0043] “…the processing unit 14 provides the business card information corresponding to the profile information to display on the display unit 13 to prompt the user (step S210), and the display method includes, for instance, directly displaying the image of the business card or displaying the business card information obtained from the business card, and the invention is not limited thereto. Accordingly, the user can see relevant information of people met in the display unit 13 and therefore recognize the person.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 11 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Li (US 20180120594).
Regarding claim 11:
Wu discloses the limitations of claim 9 as applied above.
Wu does not specifically teach: where the library of faceprints is associated with an external database.
However, in the same field of endeavor, Li teaches: where the library of faceprints is associated with an external database (¶ [0020] “a retrieval database, located locally or at a cloud end, the retrieval database storing the first comparison image for comparing with the human face image and the second comparison image for comparing with the foreign language image”; ¶ [0021] “…the retrieval database receives at least one of the following human face images as the first comparison image”; ¶ [0066] “The first comparison image mentioned above is stored in a retrieval database, and the retrieval database may be chosen to be provided locally or at a cloud end. The first comparison image in the retrieval database is from: a human face image which is fully shared, a human face image which is shared within a particular range and within an acquisition permission, a human face image which is received passively and a human face image which is actively photographed.”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Li by including: where the library of faceprints is associated with an external database in order to reduce the size of the mobile device and make the database content sharable with other devices.
Regarding claim 23: the claim’s limitations are similar to those of claim 11; therefore, claim 23 is rejected in the same manner as applied above. Note that Li also teaches smart glasses that have a frame, and another device, different from the smart glasses, that comprises the database.
Claim(s) 12 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Nelson (US 20190108492).
Regarding claim 12:
Wu discloses the limitations of claim 9 as applied above.
Wu does not specifically teach: where the first faceprint is selected from the library of faceprints based on a scheduled meeting.
However, in the same field of endeavor, Nelson teaches: where the first faceprint is selected from the library of faceprints based on a scheduled meeting (¶ [0347] “FIG. 17C is a block diagram that depicts example contents of meeting information 1732 in the form of a table, where each row corresponds to a particular electronic meeting. In the example depicted in FIG. 17C, meeting information 1732 includes a meeting ID, a meeting name, a meeting location, a date/time for the meeting, participants, and other information”; ¶ [0355] “…IWB appliance 1710 acquires facial images of persons, such as meeting participants without the participation and/or knowledge of the persons. Facial images may be acquired using one or more cameras integrated into IWB appliance 1710, such as cameras 1746, or external sensors, as described in more detail hereinafter. IWB appliance 1710 then attempts to associate the acquired facial images with particular persons. For example, image recognition application 1752 may compare facial images acquired by IWB appliance 1710 to known facial images from databases, records, social media, etc. This may include using meeting participant information. For example, the participants of a meeting may be determined, and then the acquired facial images may be compared to facial images of meeting participants to associate the acquired facial images with a person.”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Nelson by including: where the first faceprint is selected from the library of faceprints based on a scheduled meeting in order to reduce the amount of processing, storage, and networking resources consumed.
Regarding claim 24: the claim’s limitations are similar to those of claim 12; therefore, claim 24 is rejected in the same manner as applied above.
Claim(s) 14 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Kim (US 20220385617).
Regarding claim 14:
Wu discloses the limitations of claim 9 as applied above.
Wu does not specifically teach: where the determination is based on a salient event that is representative of a larger interaction.
However, in the same field of endeavor, Kim teaches: where the determination is based on a salient event that is representative of a larger interaction (¶ [0063], describing a “Face recognized” factor that uses sensor data from the camera and the output of a facial recognition algorithm: “If a face recognized as belonging to a database of faces associated with the HMD user is in a defined region for longer than a threshold time, an In Conversation event is triggered.”; ¶ [0046] – ¶ [0066] discuss other factor modules (location/movement, interaction logs, etc.) and combining them to detect whether a user is in a real-world conversation. Kim uses these conversation events as salient events to control other system behavior, such as notification management).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Kim by including: where the determination is based on a salient event that is representative of a larger interaction in order to provide effective, context-aware notification management at AR devices.
Regarding claim 31: the claim’s limitations are similar to those of claim 14; therefore, claim 31 is rejected in the same manner as applied above.
Claim(s) 25 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Lee (US 20190213773).
Regarding claim 25:
Wu discloses the limitations of claim 21 as applied above.
Wu further teaches: further cause the smart glasses device to: identify a face in an image (¶ [0017] “…smart glasses including an image capturing unit, a storage unit, a display unit, and a processing unit. In particular, the image capturing unit is configured to capture an image located in the field of view of the smart glasses…recognize at least one face appearing in the image captured by the image capturing unit”).
Wu does not specifically teach: and crop a region-of-interest including facial data from the image.
However, in the same field of endeavor, Lee teaches: and crop a region-of-interest including facial data from the image (¶ [0118] “…FIG. 10A depicts a full size high resolution image captured by the sensor 103, while FIG. 10B depicts a region of interest (ROI) (e.g., the person's face) that is cropped by the image processing module 106 to update the texture of the remote model—which is only a small portion of the overall image.”; ¶ [0052] “…Exemplary computing devices include, but are not limited to, a laptop computer, a desktop computer, a tablet computer, a smart phone, an internet of things (IoT) device, augmented reality (AR)/virtual reality (VR) devices (e.g., glasses, headset apparatuses, and so forth), or the like”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Lee by including: and crop a region-of-interest including facial data from the image in order to reduce computational and bandwidth requirements when processing/transmitting the face images.
Claim(s) 26 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Lee (US 20190213773) and Petrou (US 20140172881).
Regarding claim 26:
Wu discloses the limitations of claim 21 as applied above.
Wu further teaches: further cause the smart glasses device to: identify a face in an image (¶ [0017] “…smart glasses including an image capturing unit, a storage unit, a display unit, and a processing unit. In particular, the image capturing unit is configured to capture an image located in the field of view of the smart glasses…recognize at least one face appearing in the image captured by the image capturing unit”).
Wu does not specifically teach: identify a face in a plurality of images; select a first image from the plurality of images; and crop a region-of-interest including facial data from the first image.
However, in the same field of endeavor, Petrou teaches: identify a face in a plurality of images (¶ [0042] “The visual query is an image… or a frame or a sequence of multiple frames of a video (206)”; ¶ [0043] “…A visual query can include an image of a person's face, whether taken by a camera embedded in the client system or a document scanned by or otherwise received by the client system.”; ¶ [0160] “…the visual query contains a plurality of faces, such as a picture of two or more friends, or a group photo of several people. In some cases where the visual query comprises a plurality of facial images… prior to identifying potential image matches, the system receives a selection of the respective facial image from the requester. For example, in some embodiments the system identifies each potential face and requests confirmation regarding which face(s) in the query the requester wishes to have identified.”);
select a first image from the plurality of images (¶ [0160] “…the visual query contains a plurality of faces, such as a picture of two or more friends, or a group photo of several people. In some cases where the visual query comprises a plurality of facial images… prior to identifying potential image matches, the system receives a selection of the respective facial image from the requester. For example, in some embodiments the system identifies each potential face and requests confirmation regarding which face(s) in the query the requester wishes to have identified.”);
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Petrou by including: identify a face in a plurality of images in order to provide a variety of search results related to an identified person in the facial image query.
Wu in view of Petrou does not specifically teach: and crop a region-of-interest including facial data from the first image.
However, in the same field of endeavor, Lee teaches: and crop a region-of-interest including facial data from the first image (¶ [0118] “…FIG. 10A depicts a full size high resolution image captured by the sensor 103, while FIG. 10B depicts a region of interest (ROI) (e.g., the person's face) that is cropped by the image processing module 106 to update the texture of the remote model—which is only a small portion of the overall image.”; ¶ [0052] “…Exemplary computing devices include, but are not limited to, a laptop computer, a desktop computer, a tablet computer, a smart phone, an internet of things (IoT) device, augmented reality (AR)/virtual reality (VR) devices (e.g., glasses, headset apparatuses, and so forth), or the like”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu and Petrou to incorporate the teachings of Lee by including: and crop a region-of-interest including facial data from the first image in order to reduce computational and bandwidth requirements when processing/transmitting the face images.
Claim(s) 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20160055371) in view of Kaehler (US 20170351909).
Regarding claim 33:
Wu discloses the limitations of claim 32 as applied above.
Wu further discloses: further cause the mobile apparatus to identify a matching faceprint based on the region-of-interest (¶ [0017] “…The storage unit is configured to store a database, and the database records a plurality of profile information and business card information corresponding to each of the profile information. The processing unit is coupled to the image capturing unit, the storage unit, and the display unit, and is configured to recognize at least one face appearing in the image captured by the image capturing unit, and compare the facial features of each of the recognized faces with those of the profile information in the database to find the profile information matching the facial features.”; ¶ [0042] “…In particular, the database stored in the storage unit 12 records information related to people that the user met in the past and for whom face recognition and business card recognition are completed, wherein the information includes features such as the outline of the face, positions and shapes of facial features, hairstyle, and skin color.”).
Wu does not specifically teach: further comprising a neural network trained to perform facial recognition from the library of faceprints, where the instructions, when executed by the processor.
However, in the same field of endeavor, Kaehler teaches: further comprising a neural network trained to perform facial recognition from the library of faceprints (¶ [0080] “The object recognitions may be performed using a variety of computer vision techniques… facial recognition (e.g., from a person in the environment or an image on a document),… One or more computer vision algorithms may be used to perform these tasks. Non-limiting examples of computer vision algorithms include:… various machine learning algorithms (such as e.g., support vector machine, k-nearest neighbors algorithm, Naive Bayes, neural network (including convolutional or deep neural networks), or other supervised/unsupervised models, etc.), and so forth”; ¶ [0106] “Feature vectors of the two faces within the image 1200a may be used to compare similarities and dissimilarities between the two faces. For example, the ARD can calculate the distance (such as a Euclidean distance) between the two feature vectors in a corresponding feature vector space. When the distance exceeds a threshold, the ARD may determine the two faces are sufficiently dissimilar. On the other hand, when the distance is below the threshold, the ARD may determine the two faces are similar.”; ¶ [0111] “…The ARD can further look up the identified features in a database to determine whether there are one or more persons matching the identified features.”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Wu to incorporate the teachings of Kaehler by including: a neural network trained to perform facial recognition from the library of faceprints in order to improve performance and reduce power consumption compared to conventional facial recognition techniques.
Regarding claim 34:
Wu in view of Kaehler discloses the limitations of claim 33 as applied above.
Wu further discloses: where the metadata comprises corresponding contact information associated with the matching faceprint (¶ [0043] “If the profile information matching the facial features is found, the processing unit 14 provides the business card information corresponding to the profile information to display on the display unit 13 to prompt the user (step S210), and the display method includes, for instance, directly displaying the image of the business card or displaying the business card information obtained from the business card, and the invention is not limited thereto. Accordingly, the user can see relevant information of people met in the display unit 13 and therefore recognize the person.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WASSIM MAHROUKA whose telephone number is (571)272-2945. The examiner can normally be reached Monday-Thursday 8:00-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WASSIM MAHROUKA/Primary Examiner, Art Unit 2665