DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Amendment
The preliminary amendment to the specification and claims filed on 08/05/2024 has been acknowledged.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 7, 10-12, 17 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Anderson (US 2021/0329441).
Regarding claim 1, Anderson discloses an anti-snooping prompt method (see, for example, fig. 12), performed by a wearable device (see 100, fig. 2 and ¶0035; the computing device may be, for example, an augmented reality device as disclosed in the text of ¶0035), and comprising: obtaining environmental image data by a camera apparatus arranged on the wearable device (see ¶0040, "In the present embodiment, computing device 100 may comprise a camera module 145 capable of capturing still image and video and a device flash light module 140 capable of flashing light (continuously on light or intermittent flashing, as required) in sync with the operation of the camera module 145"); determining, in response to determining that a photographic device is present in the environmental image data, a photographing angle of the photographic device (see ¶¶0046-0047, finding of any reflection from a lens that resembles reflection of a covert spying device camera lens; for example, as shown in the user interface 401 of device 100 in FIG. 4, the GUI module 150 may augment/superimpose a crosshair on the user interface 401 over the suspected object as in step 1218 [by virtue of detecting a reflection from the lens of a covert camera, which directly implies a photographing field of view or angle of the covert camera, as stated in the text of ¶0047]); and providing an anti-snooping prompt by the wearable device in response to determining, according to the photographing angle, that a user is located within a photographing range of the photographic device (see ¶0047, "Image recognition involves both finding of resemblance of any scanned object with an already known covert spying device and finding of any reflection from a lens that resembles reflection of a covert spying device camera lens. For example, as shown in the user interface 401 of device 100 in FIG. 4, the GUI module 150 may augment/superimpose a crosshair on the user interface 401 over the suspected object as in step 1218" as an indicator in the report [by virtue of detecting a reflection from the lens of a covert camera, which directly implies a photographing field of view or angle of the covert camera, as stated in the text of ¶0047]).
Regarding claim 2, Anderson further discloses: determining, by identifying the environmental image data, whether a preset outline corresponding to the photographic device is present in the environmental image data; and determining, on a condition that the preset outline is present in the environmental image data, that the photographic device is present in the environmental image data (see ¶0047, "if the object 302 is found less likely to be a covert camera than object 304, the color of crosshair 402 superimposed on the object 302 can be blue and crosshair 404 superimposed on object 304 can be red in color").
Regarding claim 7, Anderson further discloses: determining, by identifying the environmental image data, whether a body feature is present in the environmental image data; determining, on a condition that the body feature is present in the environmental image data, a body posture corresponding to the body feature; and determining, on a condition that the body posture is a photographing posture, that the photographic device is present in the environmental image data (see ¶0048, "The image recognition steps adopted by the present invention may include preparation of the training data (for example, labeling of real pinhole camera in reflections as true and false pinhole camera as false), creating the deep learning model, training the model (i.e. fit the model to the training data) and evaluation of model accuracy on a test validation dataset of images").
Regarding claim 10: except for a few changes in wording, claim 10 has substantially the same limitations as claim 1 above, and is thus analyzed and rejected by the same reasoning.
Regarding claim 11: claim 11 is a program for performing the steps of claim 1 and the acts or functions of claim 10, and is thus analyzed and rejected by the same reasoning.
Regarding claim 12: except for its dependency, claim 12 has substantially the same limitations as claim 2 above, and is thus analyzed and rejected by the same reasoning.
Regarding claim 17: except for its dependency, claim 17 has substantially the same limitations as claim 7 above, and is thus analyzed and rejected by the same reasoning.
Claim 19 is a program for performing the steps of claim 2 and the functions of claim 12, and is thus analyzed and rejected by the same reasoning.
Allowable Subject Matter
Claims 3-6, 8, 13-16, 18 and 20-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 3: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein determining the photographing angle of the photographic device comprises: determining a lens steering angle of the photographic device in the environmental image data; and determining the photographing angle of the photographic device according to the preset outline and the lens steering angle.
Claim 4 contains allowable subject matter due to its dependency on claim 3.
Regarding claim 5: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: simulating, according to the environmental image data, a first photographing range of the wearable device in a virtual scenario; simulating, according to the photographing angle, a second photographing range of the photographic device in the virtual scenario; and determining, in a condition that an overlap region exists between the first photographing range and the second photographing range in the virtual scenario, that the user is located within the photographing range of the photographic device.
Regarding claim 6: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein the wearable device is a pair of smart glasses, and providing the anti-snooping prompt by the wearable device comprises: determining position coordinates of the photographic device relative to the smart glasses; and providing, according to the position coordinates, the anti-snooping prompt on screens of the smart glasses.
Regarding claim 8: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein obtaining the environmental image data comprises: obtaining initial environmental image data by the camera apparatus; determining, according to the initial environmental image data, a scenario type of an environment where the wearable device is located; and obtaining denoised environmental image data by performing denoising processing on the initial environmental image data according to the scenario type.
Regarding claim 13: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein determining the photographing angle of the photographic device comprises: determining a lens steering angle of the photographic device in the environmental image data; and determining the photographing angle of the photographic device according to the preset outline and the lens steering angle.
Claim 14 contains allowable subject matter due to its dependency on claim 13.
Regarding claim 15: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein the one or more processors are collectively configured to: simulate, according to the environmental image data, a first photographing range of the wearable device in a virtual scenario; simulate, according to the photographing angle, a second photographing range of the photographic device in the virtual scenario; and determine, on a condition that an overlap region exists between the first photographing range and the second photographing range in the virtual scenario, that the user is located within the photographing range of the photographic device.
Regarding claim 16: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein the electronic device is a pair of smart glasses, and providing the anti-snooping prompt by the wearable device comprises: determining position coordinates of the photographic device relative to the smart glasses; and providing, according to the position coordinates, the anti-snooping prompt on screens of the smart glasses.
Regarding claim 18: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein obtaining the environmental image data comprises: obtaining initial environmental image data by the camera apparatus; determining, according to the initial environmental image data, a scenario type of an environment where the wearable device is located; and obtaining denoised environmental image data by performing denoising processing on the initial environmental image data according to the scenario type.
Regarding claim 20: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein determining the photographing angle of the photographic device comprises: determining a lens steering angle of the photographic device in the environmental image data; and determining the photographing angle of the photographic device according to the preset outline and the lens steering angle.
Regarding claim 21: none of the prior art of record, either singly or in combination, teaches or reasonably suggests: wherein determining the lens steering angle of the photographic device in the environmental image data comprises: determining, by capturing a camera of the photographic device in the environmental image data, a lens size and a lens surface angle of the camera; determining a preset photographing range of the camera according to the lens size; and determining, according to the preset photographing range and the lens surface angle, the lens steering angle of the photographic device.
Examiner's note: claim 9 was previously cancelled.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Han (US Pat. No. 12,052,794) discloses: "a camera detection model can be trained using data comprising captured video data packets from pinhole (or other types of hidden/digital wireless) cameras. These data packets may reflect particular uplink/downlink throughput, burst rate, etc. that are indicative of captured video being wirelessly transmitted from a hidden camera to a remote server, datastore, or other device. In particular, a machine learning model (such as a decision tree model) can be generated and trained using video data packets, the characteristics of which can be used to identify video being transmitted from hidden cameras by analyzing data packet streams" (col. 2, lines 45-55).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A BERHAN whose telephone number is (571) 270-5094. The examiner can normally be reached 9:00 AM-5:00 PM (Max-Flex).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Twyler Haskins, can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED A BERHAN/Primary Examiner, Art Unit 2639