DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 9-10, and 14 of U.S. Patent No. 12260610 (hereinafter reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants of each other.
Current application
Reference patent
21. (New) A system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
maintaining, in memory, multi-descriptor data comprising, for each object of a plurality of objects, respective object attributes for a respective imaging modality i) from a plurality of imaging modalities and ii) with which a respective image of a respective object was captured;
while using settings for a first imaging modality from the plurality of imaging modalities, detecting a first representation within first image data collected by a camera;
changing the settings of the camera to use a second imaging modality from the plurality of imaging modalities;
while using the settings for the second imaging modality, detecting a second representation within second image data collected by the camera;
classifying, using the multi-descriptor data, the first representation and the second representation as associated with the same object; and
in response to classifying the first representation and the second representation as associated with the same object, transmitting operational instructions to one or more devices.
22. (New) The system of claim 21, wherein detecting the first representation comprises detecting, using the first image data collected by the camera that comprises an RGB sensor or an infrared sensor with settings for the first imaging modality, the first representation within the first image data.
23. (New) The system of claim 21, wherein changing the settings of the camera to use the second imaging modality is in response to detecting a change in lighting conditions within a physical environment that includes the camera.
24. (New) The system of claim 21, further comprising, between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day.
25. (New) The system of claim 21, further comprising: in response to detecting a new object not included in the plurality of objects, generating new multi-descriptor data; and storing, in the memory, the new multi-descriptor data.
26. (New) The system of claim 21, wherein at least some object attributes of the same object for each of the first and second imaging modalities are different.
27. (New) The system of claim 21, further comprising:
determining that an object attribute of the same object in each of the first and second imaging modalities is the same; and
in response to determining that the object attribute of the same object in each of the first and second imaging modalities is the same, determining to maintain only one instance of the object attribute that is the same for a first set of object attributes for the first imaging modality and a second set of object attributes for the second imaging modality, wherein: maintaining the multi-descriptor data comprises maintaining, in the multi-descriptor data, only one instance of the object attribute for the first set of object attributes and the second set of object attributes of the same object.
28. (New) One or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising:
maintaining, in memory, multi-descriptor data comprising, for each object of a plurality of objects, respective object attributes for a respective imaging modality i) from a plurality of imaging modalities and ii) with which a respective image of a respective object was captured;
while using settings for a first imaging modality from the plurality of imaging modalities, detecting a first representation within first image data collected by a camera;
changing the settings of the camera to use a second imaging modality from the plurality of imaging modalities;
while using the settings for the second imaging modality, detecting a second representation within second image data collected by the camera;
classifying, using the multi-descriptor data, the first representation and the second representation as associated with the same object; and
in response to classifying the first representation and the second representation as associated with the same object, transmitting operational instructions to one or more devices.
29. (New) The computer storage media of claim 28, wherein detecting the first representation comprises detecting, using the first image data collected by the camera that comprises an RGB sensor or an infrared sensor with settings for the first imaging modality, the first representation within the first image data.
30. (New) The computer storage media of claim 28, wherein changing the settings of the camera to use the second imaging modality is in response to detecting a change in lighting conditions within a physical environment that includes the camera.
31. (New) The computer storage media of claim 28, further comprising, between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day.
32. (New) The computer storage media of claim 28, further comprising:
in response to detecting a new object not included in the plurality of objects, generating new multi-descriptor data; and
storing, in the memory, the new multi-descriptor data.
33. (New) The computer storage media of claim 28, wherein at least some object attributes of the same object for each of the first and second imaging modalities are different.
34. (New) The computer storage media of claim 28, further comprising:
determining that an object attribute of the same object in each of the first and second imaging modalities is the same; and
in response to determining that the object attribute of the same object in each of the first and second imaging modalities is the same, determining to maintain only one instance of the object attribute that is the same for a first set of object attributes for the first imaging modality and a second set of object attributes for the second imaging modality, wherein:
maintaining the multi-descriptor data comprises maintaining, in the multi-descriptor data, only one instance of the object attribute for the first set of object attributes and the second set of object attributes of the same object.
35. (New) A computer-implemented method comprising:
maintaining, in memory, multi-descriptor data comprising, for each object of a plurality of objects, respective object attributes for a respective imaging modality i) from a plurality of imaging modalities and ii) with which a respective image of a respective object was captured;
while using settings for a first imaging modality from the plurality of imaging modalities, detecting a first representation within first image data collected by a camera;
changing the settings of the camera to use a second imaging modality from the plurality of imaging modalities;
while using the settings for the second imaging modality, detecting a second representation within second image data collected by the camera;
classifying, using the multi-descriptor data, the first representation and the second representation as associated with the same object; and
in response to classifying the first representation and the second representation as associated with the same object, transmitting operational instructions to one or more devices.
36. (New) The method of claim 35, wherein detecting the first representation comprises detecting, using the first image data collected by the camera that comprises an RGB sensor or an infrared sensor with settings for the first imaging modality, the first representation within the first image data.
37. (New) The method of claim 35, wherein changing the settings of the camera to use the second imaging modality is in response to detecting a change in lighting conditions within a physical environment that includes the camera.
38. (New) The method of claim 35, further comprising, between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day.
39. (New) The method of claim 35, further comprising:
in response to detecting a new object not included in the plurality of objects, generating new multi-descriptor data; and
storing, in the memory, the new multi-descriptor data.
40. (New) The method of claim 35, wherein at least some object attributes of the same object for each of the first and second imaging modalities are different.
1. A system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
generating dual descriptor data, which comprises: determining a first bounding box in first image data collected by a camera, determining a second bounding box in second image data collected by the camera, determining an overlapping region between the first bounding box and the second bounding box, and generating the dual descriptor data for an object associated with the overlapping region; (this step, together with other steps below that further recite using the generated dual descriptor data for classifying objects, implies the step of “maintaining, in memory, multi-descriptor data …” recited in claim 21 of the current application)
detecting, using a first set of descriptor features included in the dual descriptor data, a first representation within first image data collected by the camera with a first imaging modality;
determining a change to an imaging modality of the camera from the first imaging modality to a second imaging modality;
detecting, using a second, different set of features included in the dual descriptor data, a second representation within second image data collected by the camera with the second imaging modality;
classifying the first representation and the second representation as associated with a same object using the dual descriptor data; and
in response to classifying the first representation and the second representation as associated with the same object using the dual descriptor data, transmitting operational instructions to one or more appliances connected to the system.
2. The system of claim 1, wherein the camera comprises an RGB sensor and an IR sensor, and the change to the imaging modality of the camera comprises a change from using the IR sensor to the RGB sensor or from using the RGB sensor to the IR sensor.
3. The system of claim 1, wherein the operations further comprise detecting, by the camera, a change in lighting conditions; and wherein determining the change to the imaging modality of the camera is in response to detecting the change in the lighting conditions.
See claim 1 of reference patent. However, claim 1 of reference patent does not recite “between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day.”
Official Notice is taken that “changing imaging modality based on a time of day” is well known in the art, e.g., using IR imaging at nighttime while using visible light imaging at daytime to take advantage of the strengths, and avoid the weaknesses, of each imaging modality during the respective time of day (visible light imaging performs well at daytime but poorly at nighttime, while IR imaging works better at nighttime than visible light imaging).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate “between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day” into the system recited in claim 1 of reference patent to enhance the reliability of the system.
9. The system of claim 1, wherein the detecting, using the second set of features specified in the dual descriptor data, of the second representation within second image data comprises: determining that a descriptor template is not specified for the second image data; in response to determining that a descriptor template is not specified for the second image data, initiate a bounding box registration process, thereby generating a new descriptor template; and updating the dual descriptor data with the new descriptor template.
10. The system of claim 1, wherein the dual descriptor data for the object comprises object attributes in different imaging modalities with the object.
14. The system of claim 1, wherein the operations further comprise:
determining that the object is recognized to the system; comparing the generated dual descriptor data for the object with a feature template of the dual descriptor data; (this step implies the determining step recited in the current application because determining that the object is recognized by the system means determining that at least an object attribute of the same object is the same)
and
updating the feature template of the dual descriptor data with the generated dual descriptor data. (this step implies that only one copy of dual descriptor data is maintained because the feature template of the dual descriptor data is updated with the generated dual descriptor data)
Claim 1 of reference patent accommodates this claim because it recites “one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations.”
See claim 2 of reference patent above.
See claim 3 of reference patent above.
See rejection of claim 24 of current application above.
See claim 9 of reference patent above.
See claim 10 of reference patent above.
See rejection of claim 27 of current application above.
Claim 1 of reference patent accommodates the scope of claim 35 of current application.
See claim 2 of reference patent above.
See claim 3 of reference patent above.
See rejection of claim 24 of current application above.
See claim 9 of reference patent above.
See claim 10 of reference patent above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-22, 25-29, 32-36, and 39-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Terre et al. (US 2015/0358557 A1 – hereinafter Terre).
Regarding claim 21, Terre discloses a system (Fig. 12 - system 1201) comprising one or more computers (Fig. 12 – one or more processor 195) and one or more storage devices on which are stored instructions that are operable ([0035]; Fig. 12 – memory 196), when executed by the one or more computers, to cause the one or more computers to perform operations ([0035] – software instructions stored in memory 196 executed by processor 195) comprising: maintaining, in memory, multi-descriptor data comprising, for each object of a plurality of objects, respective object attributes for a respective imaging modality i) from a plurality of imaging modalities and ii) with which a respective image of a respective object was captured ([0141]; [0152]; [0156]-[0157]; [0164]; [0170] – maintaining, in memory, multi-descriptor data for each of users, animate objects, and inanimate objects, object features for thermal imaging modality and non-thermal imaging modality); while using settings for a first imaging modality from the plurality of imaging modalities, detecting a first representation within first image data collected by a camera (Fig. 14 – steps 1400-1406 – while using settings for non-thermal imaging modality, i.e. using non-thermal imaging module 1200 as described at least in [0147] and illustrated in Fig. 12, detecting a non-thermal representation of an object within non-thermal image data collected by camera 1201); changing the settings of the camera to use a second imaging modality from the plurality of imaging modalities ([0147]; Fig. 14 – steps 1412-1418 – changing the settings of the camera 1201 so that thermal imaging module 100 is used as described at least in [0144]-[0145] and illustrated in Fig. 12); while using the settings for the second imaging modality, detecting a second representation within second image data collected by the camera (Fig. 14 – steps 1400-1406 – while using settings for thermal imaging modality, i.e. 
using thermal imaging module 100 as described at least in [0144]-[0145] and illustrated in Fig. 12, detecting a thermal representation of an object within thermal image data collected by camera 1201); classifying, using the multi-descriptor data, the first representation and the second representation as associated with the same object (Fig. 14; [0170] - determining that the extracted thermal identifying features match the stored thermal identifying features of the recognized object of block 1406 and thereby verifying that the object is the recognized object, thus being the same object recognized in block 1406); and in response to classifying the first representation and the second representation as associated with the same object, transmitting operational instructions to one or more devices ([0171]-[0173]; Fig. 14 – taking actions at step 1422 by transmitting operational instructions to one or more device, i.e. authenticating an authorized user, providing access to a secure system, and/or alerting security personnel, or transmitting operational instructions to one or more devices to process the captured images such as combining the images etc.).
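For orientation, the claimed operation sequence mapped above can be outlined in code. The following Python is a minimal illustrative sketch only: all names, data structures, attribute values, and the "unlock_door" instruction are hypothetical assumptions, not taken from Terre or from the claims.

```python
# Hypothetical sketch of the claim 21 operation sequence. All identifiers
# and data values below are illustrative assumptions.

# Multi-descriptor data: per-object attributes keyed by imaging modality.
multi_descriptor_data = {
    "object_a": {"rgb": {"face_embedding": [0.12, 0.87]},
                 "ir":  {"thermal_profile": [36.5, 34.2]}},
}

def detect_representation(image_data, modality):
    """Stand-in detector: returns the attributes extracted under one modality."""
    return image_data.get(modality)

def classify_same_object(rep1, rep2, descriptors):
    """Return the object id if both representations match one stored record."""
    for obj_id, attrs in descriptors.items():
        if attrs.get("rgb") == rep1 and attrs.get("ir") == rep2:
            return obj_id
    return None

def run_pipeline(camera_frames, descriptors):
    # 1) detect a first representation while using first-modality settings
    rep1 = detect_representation(camera_frames, "rgb")
    # 2) change the camera settings to the second modality (implicit here;
    #    a real system would reconfigure the camera and capture again)
    # 3) detect a second representation under the second modality
    rep2 = detect_representation(camera_frames, "ir")
    # 4) classify both representations using the multi-descriptor data
    obj = classify_same_object(rep1, rep2, descriptors)
    # 5) on a same-object classification, transmit operational instructions
    return ["unlock_door"] if obj is not None else []

frames = {"rgb": {"face_embedding": [0.12, 0.87]},
          "ir":  {"thermal_profile": [36.5, 34.2]}}
print(run_pipeline(frames, multi_descriptor_data))
```

The sketch collapses the two capture steps into one frame dictionary for brevity; the mapping of each numbered comment to a claim limitation follows the order of the claim 21 recitation above.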
Regarding claim 22, Terre also discloses the system of claim 21, wherein detecting the first representation comprises detecting, using the first image data collected by the camera that comprises an RGB sensor or an infrared sensor with settings for the first imaging modality, the first representation within the first image data ([0147]-[0148] – at least a RGB sensor).
Regarding claim 25, Terre also discloses the system of claim 21, further comprising: in response to detecting a new object not included in the plurality of objects, generating new multi-descriptor data; and storing, in the memory, the new multi-descriptor data ([0157]; [0164]; [0181] – detecting a person as a new person, storing thermal and non-thermal features for the person to set up an account for the person).
Regarding claim 26, Terre also discloses the system of claim 21, wherein at least some object attributes of the same object for each of the first and second imaging modalities are different ([0164] – visual features such as images of a face, a fingerprint, eye, etc. are different from thermal features such as temperature variations on person’s face etc. described at least in [0156]).
Regarding claim 27, Terre also discloses the system of claim 21, further comprising: determining that an object attribute of the same object in each of the first and second imaging modalities is the same (Fig. 14 – each of steps 1406 and 1418 determines that object attributes in each of the imaging modalities are the same as stored attributes for the object); and in response to determining that the object attribute of the same object in each of the first and second imaging modalities is the same, determining to maintain only one instance of the object attribute that is the same for a first set of object attributes for the first imaging modality and a second set of object attributes for the second imaging modality, wherein: maintaining the multi-descriptor data comprises maintaining, in the multi-descriptor data, only one instance of the object attribute for the first set of object attributes and the second set of object attributes of the same object (Fig. 14 – by not changing the stored features).
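The "single instance" bookkeeping recited in claim 27 can be sketched as a simple merge of two modality attribute sets; this Python is an illustrative assumption only (the function name, keys, and sample attributes are hypothetical, not from Terre).

```python
# Illustrative sketch of maintaining only one instance of an attribute
# that is the same under both imaging modalities (claim 27). All names
# and values are hypothetical.

def merge_attribute_sets(first_set, second_set):
    """Keep a single instance of any attribute shared by both modality sets."""
    shared = {k: v for k, v in first_set.items() if second_set.get(k) == v}
    return {
        "shared": shared,  # one instance of identical attributes
        "first_only": {k: v for k, v in first_set.items() if k not in shared},
        "second_only": {k: v for k, v in second_set.items() if k not in shared},
    }

rgb_attrs = {"height_px": 180, "outline": "oval"}
ir_attrs = {"height_px": 180, "hot_spots": 3}
print(merge_attribute_sets(rgb_attrs, ir_attrs))
```

Here `height_px` is identical in both modality sets, so only one copy is retained, while modality-specific attributes are kept separately.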
Claim 28 is rejected for the same reason as discussed in claim 21 above in view of Terre also disclosing one or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform the recited operations (Fig. 12; [0035] – memory 196 storing software instructions executed by processor 195 to perform the operations as discussed in claim 21 above).
Claim 29 is rejected for the same reason as discussed in claim 22 above.
Claim 32 is rejected for the same reason as discussed in claim 25 above.
Claim 33 is rejected for the same reason as discussed in claim 26 above.
Claim 34 is rejected for the same reason as discussed in claim 27 above.
Claim 35 is rejected for the same reason as discussed in claim 21 above.
Claim 36 is rejected for the same reason as discussed in claim 22 above.
Claim 39 is rejected for the same reason as discussed in claim 25 above.
Claim 40 is rejected for the same reason as discussed in claim 26 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 23-24, 30-31, and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Terre as applied to claims 21-22, 25-29, 32-36, and 39-40 above, and further in view of Joao et al. (US 2014/0085445 A1 – hereinafter Joao).
Regarding claim 23, Terre also discloses the system of claim 21, wherein the settings of the camera are changed to use the second imaging modality in poor lighting conditions within a physical environment that includes the camera ([0152]; [0207]).
However, Terre does not disclose changing the settings of the camera to use the second imaging modality is in response to detecting a change in lighting conditions within a physical environment that includes the camera.
Joao discloses changing settings of a camera to use a second imaging modality is in response to detecting a change in lighting conditions within a physical environment that includes the camera ([0013]; [0110] – detecting darkness, changing settings of a camera 1 to activate infrared camera 4B as further described at least in [0112], detecting daylight, changing settings of the camera 1 to activate camera 4).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Joao into the system taught by Terre to automatically set the camera into optimized modes based on lighting conditions.
Regarding claim 24, Terre discloses the system of claim 21 as discussed above.
However, Terre does not disclose, between collecting the first and second image data, determining a time of day, wherein changing the settings of the camera to use the second imaging modality is based on the time of day.
Joao discloses, between collecting first and second image data, determining a time of day, wherein changing settings of a camera to use a second imaging modality is based on the time of day ([0013]; [0110] – detecting darkness, changing settings of a camera 1 to activate infrared camera 4B as further described at least in [0112], detecting daylight, changing settings of the camera 1 to activate camera 4).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Joao into the system taught by Terre to automatically set the camera into optimized modes based on the time of day.
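The modality-switching rationale applied in claims 23 and 24 can be sketched as a small selection function; the following Python is an assumption-laden illustration only (the lux threshold, the 06:00-18:00 daytime window, and all names are hypothetical and do not come from Terre or Joao).

```python
# Hypothetical sketch of switching camera settings based on lighting
# conditions or, as a fallback, time of day. Threshold and window values
# are illustrative assumptions.
from datetime import time

def select_modality(lux=None, now=None):
    """Pick 'rgb' in daylight and 'ir' in darkness.

    A measured light level takes priority; otherwise fall back to an
    assumed daytime window (06:00-18:00).
    """
    if lux is not None:
        return "rgb" if lux >= 10.0 else "ir"  # assumed darkness threshold
    if now is not None:
        return "rgb" if time(6, 0) <= now < time(18, 0) else "ir"
    return "rgb"  # default when no sensor data is available

print(select_modality(lux=0.5))           # darkness -> ir
print(select_modality(now=time(23, 30)))  # nighttime -> ir
print(select_modality(now=time(12, 0)))   # daytime -> rgb
```

The lighting-first ordering mirrors the combined rejection: the lighting-condition trigger (claim 23) and the time-of-day trigger (claim 24) can coexist in one controller.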
Claim 30 is rejected for the same reason as discussed in claim 23 above.
Claim 31 is rejected for the same reason as discussed in claim 24 above.
Claim 37 is rejected for the same reason as discussed in claim 23 above.
Claim 38 is rejected for the same reason as discussed in claim 24 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached IFT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Q Tran can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG Q DANG/Primary Examiner, Art Unit 2484