DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
2. This communication is in response to the application filed on 09/05/2023.
3. Claims 1-36 are pending.
4. Limitations appearing inside { } are intended to indicate limitations not taught by the cited prior art reference(s)/combination(s).
Drawings
5. The drawings are objected to under 37 CFR 1.83(a) because they fail to show Fig. 7 as described in the specification par. [0045] and [0059-0062]. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
6. The use of the terms BLUETOOTH, MySQL, NoSQL, MongoDB, Java, and JavaScript, in the specification at par. [0065], [0069], and [0072], which are trade names or marks used in commerce, has been noted in this application. Each term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ® following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Claim Objections
7. Claims 5, 16, and 30 are objected to because of the following informalities:
Claims 5, 16, and 30 recite “information related to an identify of an individual.” Consider correction to “information related to an identity of an individual.”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
10. Claims 1-6, 8, 10-17, 19, 21-23, 25-31, 33, and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2021/0374373 to Astvatsaturov et al. (hereinafter Astvatsaturov) in view of U.S. Publication No. 2023/0342487 to Joseph et al. (hereinafter Joseph).
11. Regarding Claim 1, Astvatsaturov discloses a method ([par. 0003, ln. 2-7] “…a method of barcode scanning using a barcode reader. The method includes capturing, using a two-dimensional (2D) imaging apparatus within the barcode reader and having a first field of view (FOV), a 2D image of a first environment appearing within the first FOV and storing 2D image data corresponding to the 2D image...”), comprising:
capturing, with an indicia reader, an image data of a field of view (FOV) associated with an imager within the indicia reader ([par. 0003, ln. 2-7], [Fig. 1] see dotted line from camera 107 to product 122 and barcode 124, [par. 0105, ln. 1-17] “…FIG. 1, as well as FIGS. 2-4, the barcode reader 106 captures images of an object, in particular a product 122… scanned by a user 108… 106 captures these images of… 122 through one of the first and second optically transmissive windows 118, 120… image capture may be done by positioning… 122 within the fields of view FOV of the digital imaging sensor(s) housed inside… 106… 106 captures images through… 118, 120 such that a barcode 124 associated with… 122 is digitally read…”);
determining, with the indicia reader, a facial {area} within the image data ([Fig. 13-16], [par. 0175, ln. 13-21] “At a block 704, the imaging system performs facial recognition on the 3D image data identifying the presence of facial data within the environment. The facial recognition may be performed by examining point cloud data… identifying geometric features and comparing them to a trained object identification model, to a 3D anthropometric data model stored at a scanning station or at a remote server, or to other models for identifying facial features.”, [par. 0179, ln. 1-14] “FIG. 14A illustrates an example 3D image of a first environment of a FOV of the 3D imaging apparatus. FIG. 14B illustrates… a second environment… FIG. 14A, a face 750 is recognized at a first distance adjacent a platter 752 of a symbology reader… FIG. 14B, a face 754 is recognized at a second distance far from the platter 752. Based on the identification of the facial data at the block 704, for the 3D image capturing the environment in FIG. 14A, the block 706, identifying a face, such as the face of a child standing at bi-optic reader height, may adjust an operating parameter of a 2D imaging apparatus to protect the person from being illuminated by light from an illumination assembly.”, [par. 0180, ln. 1-9] “A similar process may be performed using 2D image data and facial recognition. FIG. 15 is a flowchart showing an example process 780. At block 782, 2D images are captured by a 2D imaging assembly, and at a block 784 facial recognition is performed on the 2D images… imaging processor may be configured to use a 2D anthropometric data model stored at a scanning station to determine if edge data or other contrast identified in the 2D image corresponds to facial data.”);
{producing, with the indicia reader, an anonymized image data by altering pixel data of the facial area of the image data}; and
(i) storing {the anonymized} image data in a nonvolatile memory of the indicia reader ([par. 0003, ln. 2-13] “… storing 2D image data corresponding to the 2D image… The method further includes… storing 3D image data corresponding to the 3D image.”, [par. 0186, ln. 1-15] “…system 1000 includes a memory (e.g., volatile memory, non-volatile memory) 1008 accessible by the image processors 1006 (e.g., via a memory controller)…”), or (ii) transmitting {the anonymized} image data from the indicia reader to one or more host processors ([par. 0134, ln. 1-9] “…the remote server 350 is an image processing and object identification server configured to receive 2D images (2D image data) and 3D images (3D image data) (and optionally other image scan data, such as decoded indicia data, physical features, etc.) from the scanning station 302 and perform object identification such as object identification and improper object detection and other techniques described herein, including at least some of the processes described in reference to FIGS. 8-16.”).
Astvatsaturov does not specifically disclose identifying a facial “area”, though one of ordinary skill in the art would recognize that Astvatsaturov does disclose identifying a face within the image data. Astvatsaturov does not specifically disclose producing anonymized image data by altering pixel data of the facial area, though one of ordinary skill in the art would recognize that Astvatsaturov does disclose analogous storage of image data in the nonvolatile memory of the indicia reader and transmitting the image data from the indicia reader to one or more host processors.
However, Joseph teaches producing anonymized image data by altering pixel data of the facial area of the image data ([Fig. 6A-D], [par. 0039, ln. 1-24] “…systems may modify the image data to protect the privacy of the user and/or other persons who are depicted or otherwise represented in the image data… may blur or pixelate a person's face in the image to protect the privacy of the person (e.g., see FIG. 6A)… may cover a person's face in the image (e.g., with a black box or a cartoony avatar face) to protect the privacy of the person (e.g., see FIGS. 6B-6C)… may use inpainting to effectively remove a person and/or their face from the image to protect the privacy of the person (e.g., see FIG. 6D)…”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Astvatsaturov and Joseph as within the same field of image processing of images including people in a public setting, and as analogous art to the claimed invention. The motivation to combine is disclosed in Joseph, in that anonymizing image data of people may be a prerequisite depending on the laws/regulations of a given country and/or region, and furthermore that it protects the privacy of users within an image ([par. 0038, ln. 
5-34] “…sending of image data that may include representations of users and/or of other persons in an environment raises privacy concerns, as some of those persons might not want for an image of them (e.g., of their faces and/or other portion(s) of their respective bodies) to be captured and/or shared by the network-based interactive systems… a user may be using a… system outdoors, in a coffeeshop, in a store, at home, at an office, or at a school… the camera(s) of the… system may end up capturing image data of other persons besides the user, for instance as those persons may end up walking into the field of view of the camera(s), or the field of view of the camera(s) may move (e.g., as the user moves their head while wearing an HMD that is part of the network-based interactive system) to include the persons. For certain persons, such as children, privacy laws in certain countries or regions may prohibit or otherwise regulate image capture and/or sharing using such network-based interactive systems... the user himself/herself may not want his/her image to be shared using a… system, for instance if the user has not yet gotten ready for the day, is having a bad hair day, is feeling sick or unwell, is wearing an outfit (e.g., pajamas) that they would prefer not to share, is eating while using the… system…”). Specifically, given that the method of Astvatsaturov may be within an analogous retail environment ([par. 0001, ln. 1-15] “…many warehousing, distribution, and retail environments, these symbology readers capture 2D image data which is used to conduct barcode decoding operations... 
In other instances, the bi-optic reader may use 2D image data to help determine the nature of the product being scanned.”), one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that the privacy concerns and regulation requirements (depending on the region wherein the scanner is located) would be equally applicable to the method of Astvatsaturov. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the method of Astvatsaturov with the anonymization of image data in Joseph through known means, with no change to their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 1.
12. Regarding Claim 2, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Joseph teaches wherein the altering the pixel data comprises: (i) removing color data of the pixel data, or (ii) replacing the color data of the pixel data with color data of a predetermined color ([Fig. 6A-D], [par. 0039, ln. 1-24]). Specifically, one of ordinary skill in the art would recognize that the black box is effectively identical to removing the color data of the pixel data and/or replacing the color data of the pixel data with color data of a predetermined color. Furthermore, inpainting could also be understood to be analogous to the above-described limitations. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 2.
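As a minimal illustrative sketch only (not drawn from the cited references), the black-box replacement discussed above can be expressed as overwriting a facial bounding box in an RGB pixel array with a predetermined color; the function name, array representation, and color value are illustrative assumptions:

```python
import numpy as np

def anonymize_region(image, box, color=(0, 0, 0)):
    """Replace the pixel data inside a facial bounding box with a
    predetermined color (illustrative of the black-box approach)."""
    x0, y0, x1, y1 = box
    out = image.copy()          # leave the source array untouched
    out[y0:y1, x0:x1] = color   # overwrite the RGB values in the region
    return out

# Illustrative usage: a 4x4 all-white RGB test image.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
anon = anonymize_region(img, (1, 1, 3, 3))
```

Note that writing the color (0, 0, 0) both removes the original color data and drives the intensity of the region to 0, which is why the same black-box operation can be read onto the color-replacement and intensity-change limitations discussed in claims 2 and 3.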
13. Regarding Claim 3, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Joseph teaches wherein the altering the pixel data comprises: (i) changing an intensity of the pixel data, or (ii) replacing pixel data to create an anonymizing graphic ([Fig. 6A-D], [par. 0039, ln. 1-24]). Specifically, one of ordinary skill in the art would recognize that the black box is effectively identical to changing an intensity of the pixel data to 0 for the respective box, and the face decal of Joseph as analogous to replacing pixel data to create an anonymizing graphic. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 3.
14. Regarding Claim 4, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Joseph teaches wherein the determining the facial area comprises determining at least one facial feature within the facial area ([Fig. 7], [par. 0068, ln. 1-52] “The face detector 240 can detect, extract, recognize, and/or track features of the face, body, object(s), and/or portions of the environment in order to detect the face of the person… 240 detects the face of the person by first detecting the body of the person, and then detecting the face based on the expected position of the face within the structure of the body… 240 detects the face of the person by inputting the image 235 into one or more of the one or more trained machine learning (ML) model(s) 277 discussed herein, and receiving an output indicating the face's position and/or orientation… 277 can be trained… for use by… 240 using training data that includes images that include faces for which positions and/or orientations of the faces are pre-determined… 240 detects a position of the face within the image 235 (e.g., pixel coordinates), a position of the face within the environment (e.g., 3D coordinates within the 3D volume of the environment), an orientation (e.g., pitch, yaw, and/or roll) of the face within the image 235 (e.g., along axes about which rotation is visible in the image 235), and/or an orientation (e.g., pitch, yaw, and/or roll) of the face within the environment… the pose (e.g., position and/or orientation) of the face in the environment can be based on how a distance between two features on the face (e.g., an inter-eye distance) in the image 235 compares to a reference distance (e.g., inter-eye distance) for an average human being… 240 detects the face of the person using feature detection, feature extraction, feature recognition, feature tracking… some of the sensor(s) 230 face the eye(s) of the user 215, and… 240 can detect the face in the image 235 based on gaze detection of the gaze of 
the eye(s) of the user 215… Within FIG. 2A, a graphic representing… 240 illustrates the image 235 with a bounding box around a face of a person depicted in the image 235, with a zoomed-in version of the face of the person illustrated extending from the box…”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 4.
15. Regarding Claim 5, a combination of Astvatsaturov and Joseph teaches the method of claim 4. It would have been obvious to one of ordinary skill in the art, in view of the motivation to anonymize as disclosed in Joseph ([par. 0038, ln. 5-34] see specifically “For certain persons, such as children, privacy laws in certain countries or regions may prohibit or otherwise regulate image capture and/or sharing using such network-based interactive systems…”) to further not put the image data through at least one of a facial recognition module or a person recognition module configured to provide information related to an identity of an individual. Specifically, the examiner notes that while Joseph does perform profiling of individuals in images ([par. 0069, ln. 1-15] “…profile identifier 245… 245 retrieves a profile of the person whose face was detected by the face detector 240 from a data store 290. If the person does not already have a profile in the data store 290,… 245 can create (e.g., generate) a profile for the person… 245 and/or the face detector 240 use facial recognition on the face of the person, and/or person recognition on the body of the person, to recognize an identifier for the person (e.g., name, email address, phone number, mailing address, number, code, etc.)”), the motivation provided by Joseph notes that the anonymization must comply with the regulations of whatever region the imaging is occurring in ([par. 0038, ln. 5-34]). In such a case, a region that bans and/or restricts the use of “facial recognition” and/or “person-recognition” would require that the image data not be put through a “facial recognition” or “person recognition” module to comply with the regulations. An example would be the City of Portland ([City Code, Title 34 Digital Justice, Chapter 34.10, 34.10.010 Purpose, par. 1, ln. 1 to 34.10.030 Prohibition, par. 1, ln. 
2] “…The purpose of this Chapter is to prohibit the use of Face Recognition Technologies in Places of Public Accommodation by Private Entities within the boundaries of the City of Portland…A. “Face Recognition” means the automated searching for a reference image in an image repository by comparing the facial features of a probe image with the features of images contained in an image repository (one-to-many search). A Face Recognition search will typically result in one or more most likely candidates—or candidate images—ranked by computer-evaluated similarity or will return a negative result… B. “Face Recognition Technologies” means automated or semi-automated processes using Face Recognition that assist in identifying, verifying, detecting, or characterizing facial features of an individual or capturing information about an individual based on an individual's face… D. “Places of Public Accommodation” 1. means: Any place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise.2. does not include: An institution, bona fide club, private residence, or place of accommodation that is in its nature distinctly private... Except as provided in the Exceptions section below, a Private Entity shall not use Face Recognition Technologies in Places of Public Accommodation within the boundaries of the City of Portland.”). One of ordinary skill in the art would specifically recognize that analogous restrictions are applicable to other regions. For example, Illinois BIPA law restricts the use of “facial geometry” which would be particularly relevant to the 3D camera in Astvatsaturov which would be classified under a biometric identifier ([pg. 1, (740 ILCS 14/10) Sec. 10. Definitions, par. 1, ln. 1-3] “"Biometric identifier" means a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. 
Biometric identifiers do not include writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color…”, [pg. 2, (740 ILCS 14/15) Sec. 15. Retention; collection; disclosure; destruction, (b), ln. 1-9] “(b) No private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person's or a customer's biometric identifier or biometric information, unless it first: (1) informs the subject or the subject's legally authorized representative in writing that a biometric identifier or biometric information is being collected or stored; (2) informs the subject or the subject's legally authorized representative in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and (3) receives a written release executed by the subject of the biometric identifier or biometric information or the subject's legally authorized representative.”). One of ordinary skill in the art of image analysis of images taken in public spaces would have been familiar with such regulations. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combination of Astvatsaturov and Joseph to exclude facial recognition and/or person recognition on the image data so as to comply with local regulations.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph, and to further modify the combination of Astvatsaturov and Joseph with knowledge inherent to one of ordinary skill in the art to exclude performing facial recognition or person recognition on the image data, to obtain the invention as specified in claim 5.
16. Regarding Claim 6, a combination of Astvatsaturov and Joseph teaches the method of claim 4. Joseph teaches wherein the determining the facial area comprises applying a predetermined pixel area around a determined location of the at least one facial feature ([par. 0068, ln. 1-52], [Fig. 2A-B]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 6.
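Purely for illustration (the function name, margin value, and coordinate convention are assumptions, not taken from Joseph), applying a predetermined pixel area around a detected facial-feature location amounts to expanding a fixed margin around the feature coordinate and clamping the result to the image bounds:

```python
def facial_area_around_feature(feature_xy, margin, image_shape):
    """Apply a predetermined pixel area (a fixed margin) around a
    detected facial-feature location, clamped to the image bounds."""
    x, y = feature_xy
    h, w = image_shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(w, x + margin), min(h, y + margin)
    return (x0, y0, x1, y1)

# Illustrative usage: a feature at (10, 8) with a 5-pixel margin
# in a 32x32 image yields the box (5, 3, 15, 13).
box = facial_area_around_feature((10, 8), 5, (32, 32))
```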
17. Regarding Claim 8, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Joseph teaches wherein the determining the facial area comprises determining a non-facial feature associated with a human body within the image data, and determining the facial area to be an area positioned at a predetermined positional relationship relative to the non-facial feature ([par. 0068, ln. 1-52] “…240 detects the face of the person by first detecting the body of the person, and then detecting the face based on the expected position of the face within the structure of the body…”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 8.
18. Regarding Claim 10, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Astvatsaturov further discloses wherein the image data is: (i) pre-anonymized image data received from the imager within the indicia reader ([par. 0105, ln. 1-17]), {and (ii) not stored on the nonvolatile memory of the indicia reader}.
Astvatsaturov does not specifically disclose wherein the image data is not stored on the nonvolatile memory of the indicia reader. Specifically, Astvatsaturov uses either volatile or nonvolatile memory to store the images ([par. 0003, ln. 2-13], [par. 0186, ln. 1-15]). Likewise, Joseph does not teach wherein the image data is: (i) pre-anonymized image data received from the imager within the indicia reader, and (ii) not stored on the nonvolatile memory of the indicia reader.
However, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that arguments analogous to claim 5 are further applicable to claim 10. Specifically, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to not store the image data on the nonvolatile memory to comply with, for example, Illinois BIPA ([pg. 2, (740 ILCS 14/15) (b), ln. 1-8 and (e), ln. 1-8] “…(1) informs the subject or the subject's legally authorized representative in writing that a biometric identifier or biometric information is being collected or stored; (2) informs the subject or the subject's legally authorized representative in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and… (e) A private entity in possession of a biometric identifier or biometric information shall: (1) store, transmit, and protect from disclosure all
biometric identifiers and biometric information using the reasonable standard of care within the private entity's industry; and (2) store, transmit, and protect from disclosure all biometric identifiers and biometric information in a manner that is the same as or more protective than the manner in which the private entity stores, transmits, and protects other confidential and sensitive information.”). Specifically, one of ordinary skill in the art would recognize that, by not storing the image data on the nonvolatile memory, retention of the non-anonymized image data is prevented, since the image data would be stored only in the volatile memory of the processor while executing the anonymization. One of ordinary skill in the art would recognize that this protects the biometric information present in the image data, since the contents of volatile memory are typically retained only so long as the computer has power and the information in said memory section is being used by the processor for tasks (see also “Implications of securing data in RAM” in 892). As such, the sensitive, non-anonymized image data would be retained only until anonymization is complete, thus preventing concerns with regard to the security of the image data prior to anonymization.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 10.
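The volatile-only handling discussed above can be sketched, for illustration only, as a processing flow in which the raw frame exists solely as an in-process (in-RAM) object and only the anonymized result is ever handed to persistent storage; the function names and callback structure are illustrative assumptions, not drawn from either reference:

```python
def process_frame(raw_frame, anonymize, persist):
    """Hold the raw (non-anonymized) frame only in volatile memory:
    the only data handed to persistent storage is the anonymized
    result, so no non-anonymized copy reaches nonvolatile memory."""
    anonymized = anonymize(raw_frame)  # e.g., black out the facial area
    persist(anonymized)                # write only the anonymized data
    return anonymized                  # the raw frame is dropped on return

# Illustrative usage: a list stands in for a frame, and appending to
# `stored` stands in for writing to nonvolatile storage.
stored = []
result = process_frame([1, 2, 3], lambda frame: [0] * len(frame), stored.append)
```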
19. Regarding Claim 11, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Astvatsaturov specifically discloses storing the image data in at least one volatile memory of the indicia reader to execute the determining of the facial area ([par. 0003, ln. 2-13], [par. 0186, ln. 1-15]). Specifically, one of ordinary skill in the art would recognize that Astvatsaturov stores the “image data” before anonymization in either a volatile or non-volatile memory 1008 upon which the processor 1006 performs operations. Furthermore, one of ordinary skill in the art would specifically recognize that storing the “image data” in volatile memory (during execution of processing) would have been obvious, since volatile memory has significantly lower latency than non-volatile memory, thus allowing for faster performance of operations. Therefore, it would have been apparent to one of ordinary skill in the art, before the effective filing date of the claimed invention, to store the image data in at least one volatile memory to execute the determining of the facial area and the producing of the anonymized image data in combining the method of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 11.
20. Regarding Claim 12, the claim language is analogous to claim 1 with the exception of “An imaging engine, the imaging engine comprising one or more processors configured to:”, which is disclosed in Astvatsaturov ([par. 0003, ln. 2-13], [par. 0186, ln. 1-15] see image processors 1006). Rejections analogous to claim 1 are further applicable to the remainder of claim 12.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 12.
21. Regarding Claim 13, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 13 is analogous to claim 2. Rejections analogous to claim 2 are further applicable to the remainder of claim 13 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 13.
22. Regarding Claim 14, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 14 is analogous to claim 3. Rejections analogous to claim 3 are further applicable to the remainder of claim 14 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 14.
23. Regarding Claim 15, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 15 is analogous to claim 4. Rejections analogous to claim 4 are further applicable to the remainder of claim 15 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 15.
24. Regarding Claim 16, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 15. The claim language of claim 16 is analogous to claim 5. Rejections analogous to claim 5 are further applicable to the remainder of claim 16 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph, and to further modify the combination of Astvatsaturov and Joseph with knowledge inherent to one of ordinary skill in the art to exclude performing facial recognition or person recognition on the image data, to obtain the invention as specified in claim 16.
25. Regarding Claim 17, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 15. The claim language of claim 17 is analogous to claim 6. Rejections analogous to claim 6 are further applicable to the remainder of claim 17 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 17.
26. Regarding Claim 19, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 19 is analogous to claim 8. Rejections analogous to claim 8 are further applicable to the remainder of claim 19 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 19.
27. Regarding Claim 21, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 21 is analogous to claim 10. Rejections analogous to claim 10 are further applicable to the remainder of claim 21 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 21.
28. Regarding Claim 22, a combination of Astvatsaturov and Joseph teaches the imaging engine of claim 12. The claim language of claim 22 is analogous to claim 11. Rejections analogous to claim 11 are further applicable to the remainder of claim 22 in view of the imaging engine of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the imaging engine of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 22.
29. Regarding Claim 23, the claim language is analogous to claim 1 with the exception of “An indicia reader, comprising: an imaging assembly configured to capture image data of an environment appearing in a field of view (FOV); a housing including a tower, a platter, and the imaging assembly; and a non-transitory computer-readable media storing machine-readable instructions that, when executed, cause the indicia reader to…”. Astvatsaturov discloses an indicia reader, comprising: an imaging assembly configured to capture image data of an environment appearing in a field of view (FOV) ([Fig. 1], [par. 0098, ln. 1-13] “…cameras 107 and 109 (as well as other cameras described in other examples including those of FIGS. 2-4) may be referred to as image acquisition assemblies and may be implemented as a color camera, monochromatic camera, or other camera configured to obtain images of an object… the camera 107 is within a vertically extending, upper housing 114 (also referred to as an upper portion or tower portion) of the barcode reader 106, and the camera 109 is within a horizontally extending, lower housing 112 (also referred to as a lower portion or platter portion). The upper housing 114 is characterized by a horizontally extending field of view for the camera 107. The lower housing 112 is characterized by a vertically extending field of view for the camera 109…”); a housing including a tower, a platter, and the imaging assembly ([Fig. 1] see upper housing 114 (i.e., tower), lower housing 112 (i.e., platter), and cameras 107 and 109 (i.e., imaging assembly), [par. 0098, ln. 1-13]); and a non-transitory computer-readable media storing machine-readable instructions that, when executed, cause the indicia reader to ([Fig. 7] see memory 330, [par. 0117, ln. 1-7] “FIG. 7 illustrates an example system where embodiments of the present invention may be implemented. 
In the present example, the environment is provided in the form of a facility having one or more scanning locations 300 corresponding to an imaging system, such as the imaging systems 100, 100′, 100″, 140, 140″ and 200 of 1, 2, 3, 4A, 4B, 5, and 6”, [par. 0125, ln. 8-19] “…memory 330 include any number or type(s) of non-transitory computer-readable storage medium or disk…”). Rejections analogous to claim 1 are further applicable to the remainder of claim 23. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 23.
30. Regarding Claim 25, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. Astvatsaturov discloses causing the indicia reader to: {prior to the production of the anonymized image data}, attempt to identify an indicia in a non-facial area of the image data ([par. 0172, ln. 1-22] “…determining the first object identification of the object using the 2D image data includes identifying a barcode for the object in the 2D image data and decoding the barcode to generate barcode payload data and determining the first object identification from the barcode payload data… the block 606 may determine that one or more geometric features of the object from the 3D image data are based on a first subset of the 3D image data and not based on a second subset of the 3D image data. The first subset of the 3D image data may be associated with a first subset of the data points having the respective distance value associated with the distance from the 3D imaging apparatus being within a predetermined range, and the second subset of the 3D image data may be associated with a second subset of the data points having the respective distance value associated with the distance from the 3D imaging apparatus being outside of the predetermined range.”); and if the indicia is not identified in the non-facial area, attempt to identify the indicia in an area of the image data including both the non-facial area and the {facial area} ([par. 0174, ln. 1-21] “…block 604 may identify a scannable object using the 3D image data, and if the block 606 fails to determine an object identification of the object using the 2D image data, the block 608 may determine an improper scanning of the object and generating an alarm signal. For example, a 2D image may be captured at the block 602, but that 2D image may not have any barcode visible or only a partial barcode visible, e.g., where an operator covers all or part of the barcode. 
Upon detecting the presence of an object being scanned over an allowed scanning distance in the FOV of the 3D imaging apparatus, as determined at block 604, the block 606 will attempt to identify the object scanned in the 2D image data and in the 3D image data, i.e., by determining a first and second object identification, respectively. The second object identification may be the physical shape, distance, or product classification (i.e., from a trained object recognition model). If a first object identification cannot be determined, however, (e.g., when no barcode can be properly decoded from the 2D image), the block 608 determines an improper scanning and generates an alarm signal.”). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that when an object barcode is not visible or not within a predetermined area (i.e., allowable scanning distance), Astvatsaturov attempts to find said object code in a second object identification as determined by a physical shape, distance, or product classification. Astvatsaturov does not specifically disclose this second object identification and/or subset includes a facial area.
However, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Joseph discloses both a facial area and anonymizing image data ([Fig. 6A-D], [par. 0039, ln. 1-24]). Specifically, one of ordinary skill in the art, in combining the indicia reader of Astvatsaturov with the anonymization of image data in Joseph, would have recognized that if the indicia was not present within a predetermined area analogous to the scanning in Astvatsaturov, to further attempt to find the indicia in the facial area prior to anonymization. This is because if the product/barcode is within the facial region as detected in Joseph, and the facial region is already anonymized, the product and barcode would be rendered unreadable. Furthermore, one of ordinary skill in the art would recognize that in such a case, suspicious activity may be occurring as disclosed in Astvatsaturov ([par. 0007, ln. 1-10] “…2D image data includes identifying the action performed by the operator, and responsive to the action performed by the operator being identified as one of presenting an object within a product-scanning region and presenting the object proximate to the product-scanning region, and detecting a partially covered barcode or fully covered barcode on the object within at least one of the 2D image data and the enhanced 2D image data, generating an alert suitable for signaling a potential theft event.”), and thus another benefit would be possible theft protection by preventing objects from being hidden behind and/or within the facial area. Therefore, it would have been apparent to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform an analogous indicia search of Astvatsaturov prior to application of the anonymization of image data in Joseph so as to correctly identify indicia within an image and to further prevent suspicious activity from going undetected.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 25.
31. Regarding Claim 26, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. Astvatsaturov discloses the FOV is a tower FOV extending horizontally from the tower, and the imaging assembly is further configured to capture a platter FOV extending vertically from the platter; and the machine-readable instructions, when executed, further cause the indicia reader to ([Fig. 1], [par. 0098, ln. 1-13], [Fig. 7] see memory 330, [par. 0117, ln. 1-7], [par. 0125, ln. 8-19]), wherein the remainder of the claim is analogous to claim 1 as applied to an image acquired from the platter FOV. Arguments analogous to claim 1 are further applicable to claim 26. Specifically, one of ordinary skill in the art would recognize that the platter image could include a face, and therefore, would have applied anonymization to the platter image data as well. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 26.
32. Regarding Claim 27, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 27 is analogous to claim 2. Rejections analogous to claim 2 are further applicable to claim 27 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 27.
33. Regarding Claim 28, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 28 is analogous to claim 3. Rejections analogous to claim 3 are further applicable to claim 28 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 28.
34. Regarding Claim 29, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 29 is analogous to claim 4. Rejections analogous to claim 4 are further applicable to claim 29 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 29.
35. Regarding Claim 30, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 29. The claim language of claim 30 is analogous to claim 5. Rejections analogous to claim 5 are further applicable to claim 30 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 30.
36. Regarding Claim 31, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 29. The claim language of claim 31 is analogous to claim 6. Rejections analogous to claim 6 are further applicable to claim 31 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 31.
37. Regarding Claim 33, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 33 is analogous to claim 8. Rejections analogous to claim 8 are further applicable to claim 33 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 33.
38. Regarding Claim 35, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 35 is analogous to claim 10. Rejections analogous to claim 10 are further applicable to claim 35 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 35.
39. Regarding Claim 36, a combination of Astvatsaturov and Joseph teaches the indicia reader of claim 23. The claim language of claim 36 is analogous to claim 11. Rejections analogous to claim 11 are further applicable to claim 36 in view of the indicia reader of Astvatsaturov. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the indicia reader of Astvatsaturov with the anonymization of image data in Joseph to obtain the invention as specified in claim 36.
40. Claims 7, 18, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2021/0374373 to Astvatsaturov in view of U.S. Publication No. 2023/0342487 to Joseph, and further in view of “First Sight: A Human Body Outline Labeling System” by Leung et al. (hereinafter Leung).
41. Regarding Claim 7, a combination of Astvatsaturov and Joseph teaches the method of claim 1. Astvatsaturov and Joseph do not specifically disclose wherein the determining the facial area comprises determining an outline of a person within the image data, and determining the facial area based on a predetermined positional relationship of the facial area to the outline. Specifically, Astvatsaturov discloses that facial data can be detected using edge data ([par. 0180, ln. 1-9]), but does not specify the edge data comprises “an outline” or a predetermined positional relationship of the area to the outline. Joseph teaches determining a facial area based on a predetermined positional relationship between the facial area and the body ([par. 0068, ln. 1-52]), but does not disclose an outline.
However, Leung teaches determining the facial area comprises determining an outline of a person within the image data ([pg. 361, Fig. 3], [pg. 361, col. 1, D. Overview, par. 1, ln. 1-13] “The structure chart of First Sight is shown in Fig. 2. A sequence of raw intensity pictures is first fed to a segmentation process to segment moving outlines of the moving objects from the stationary foreground and background. Ribbon [6], [7], [8], [26], [28], [29], [32], which is a popular representation for 2D shape, is used as r