DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 12, filed 3/2/2026, with respect to 35 U.S.C. 101 have been fully considered and are persuasive. The rejection under 35 U.S.C. 101 has been withdrawn in light of applicant’s amendments.
Applicant's arguments filed 3/2/2026 with respect to 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant argues:
The Office Action asserts that paragraph [0023] of Mozer teaches the above claimed
elements because paragraph [0023] of Mozer states, "In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII. For example, in the case where a sequence of numbers matching a template of the form ###-###-#### is found, PII sanitizing module 202 can conclude that this sequence likely represents a phone number. Further, in the case where a sequence of characters and symbols matching the template *@*.* is found, PII sanitizing module 202 can conclude that this sequence likely represents an email address" (note that optical character recognition to detect PII [i.e., entity recognition] is understood to be performed on the visual data [i.e., the image or video] as a whole; this would include OCR to look for PII in all of the regions of the image). That is, the Office Action appears to assert that Mozer's region including "sequences of numbers, letters, and/or symbols" allegedly corresponds to the 'entity content region' of claim 3.
However, Applicant respectfully submits that Mozer does not disclose or suggest "obtaining...a non-entity content region", even if Mozer teaches a feature of obtaining an entity content region by using a template described in paragraph [0023] of Mozer. Paragraph [0023] of Mozer fails to teach an operation of obtaining non-entity content regions. Other portions of Mozer also fail to teach such an operation. For example, FIG. 3 of Mozer shows the steps 310 ("REMOVE ... RESULTING IN SANITIZED VISUAL DATA SAMPLE") and 312 ("OUTPUT SANITIZED VISUAL DATA SAMPLE"). However, Mozer fails to teach that those steps include an operation of obtaining non-entity content regions.
The examiner notes that Mozer performs OCR on the character regions and identifies those character regions which correspond to “personally identifiable information,” such as phone numbers and email addresses (see paragraph 23); this corresponds to the entity content region of the claims. The examiner notes that this necessarily requires discriminating between which text regions are “entity content” and which regions are not. The non-entity content regions are clearly the remaining regions for which entity content was not determined. By labeling certain regions as containing PII, the remaining regions have effectively been labeled as not PII. This is further made clear in paragraph 27, in which the PII regions are obfuscated in the image. This effectively draws a distinction between entity content regions and non-entity content regions, because entity content regions will be obfuscated and non-entity content regions will not.
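The examiner's reasoning can be illustrated with a short sketch (a hypothetical illustration only, not Mozer's actual implementation): matching OCR'd text regions against PII templates such as ###-###-#### and *@*.* necessarily partitions the text regions into entity (PII) regions and, by elimination, non-entity regions. The region structure, names, and template set below are assumptions for illustration.

```python
import re

# Hypothetical PII templates corresponding to those described in Mozer's
# paragraph [0023]: a phone-number template (###-###-####) and an
# email template (*@*.*).
PII_TEMPLATES = {
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "email": re.compile(r"\b\S+@\S+\.\S+\b"),
}

def partition_regions(ocr_regions):
    """Split OCR'd text regions into entity (PII) and non-entity regions."""
    entity, non_entity = [], []
    for region in ocr_regions:
        if any(t.search(region["text"]) for t in PII_TEMPLATES.values()):
            entity.append(region)       # matches a PII template
        else:
            non_entity.append(region)   # remaining regions, labeled by elimination
    return entity, non_entity

# Illustrative OCR output: bounding boxes with recognized text.
regions = [
    {"box": (10, 10, 120, 30), "text": "555-123-4567"},
    {"box": (10, 40, 200, 60), "text": "Store hours: 9-5"},
    {"box": (10, 70, 180, 90), "text": "jane@example.com"},
]
entity, non_entity = partition_regions(regions)
```

As the sketch shows, labeling certain regions as PII leaves every other region labeled non-PII without a separate detection step, consistent with the reasoning above.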
Applicant's remaining arguments with respect to 35 U.S.C. 103 rely on the above arguments and are therefore also unpersuasive.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4, 5, 12, 15, 16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mozer (US 2023/0229803).
Re claim 1, Mozer discloses a picture processing method performed by an electronic device, the picture processing method comprising:
obtaining a target picture (see paragraphs 21-22; note that the visual data comprises an image or video);
performing region type detection and character recognition on the target picture by combining a target type detection technology (see paragraph 22, “For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person”; note that PII regions, including faces, are detected) with an optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that regions which contain text that is PII are determined);
determining a target region to be occluded from the target picture based on a result of the region type detection and a result of the character recognition (see paragraphs 23-25; note that regions containing PII, which are to be sanitized [occluded], are determined);
and occluding the target region in the target picture (see paragraph 25, “PII sanitizing module 202 can sanitize (i.e., remove, obfuscate, or transform) the identified PII, thereby converting the visual data sample into a sanitized/identity-neutral form”; note that PII may be sanitized and removed).
wherein the performing the region type detection and the character recognition on the target picture by combining the target type detection technology with the optical character recognition technology comprises: obtaining a first type region and a second type region by performing region type detection on the target picture using the target type detection technology (see paragraph 22, “In one set of embodiments, the identification performed at step 304 can involve using an ML model (e.g., neural network, decision tree, support vector machine, etc.) that reads pixel values of the visual data sample and outputs region proposals (e.g., bounding boxes or segmentation maps) indicating regions in the visual data sample that are likely to contain PII of a given type. For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person. And for a region R2 with pixel values that the ML model has determined are indicative of a street sign or some other location indicator, the ML model may output a region proposal indicating that R2 is likely to contain that street sign/location indicator.” Note that regions containing PII in the image or video are determined; this corresponds to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII); and obtaining a character region by performing character recognition on the target picture or the second type region using the optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample.
These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that character recognition is performed on the image or video), and wherein the first type region is a region type to be occluded, and the second type region is a region type not to be occluded (see paragraph 22; note that regions containing PII in the image or video are determined, corresponding to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII; see also paragraphs 24 and 25; note that the regions containing PII are regions to be occluded and other regions are not occluded).
wherein the determining the target region to be occluded from the target picture based on the result of the region type detection and the result of the character recognition comprises: obtaining an entity content region and a non-entity content region by performing entity recognition based on the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that PII corresponds to the entity content within the text, i.e., personally identifying information, and the PII text is then obfuscated (see paragraph 27); the non-obfuscated text corresponds to the “non-entity region.” The examiner notes that, when identifying text containing PII, any text not identified corresponds to “non-entity text”); and determining the first type region and the entity content region as the target region to be occluded (see paragraphs 23-27; note that biological feature PII and text-based PII are both occluded).
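The sanitization (occlusion) step described in Mozer's paragraph 25 can be sketched as overwriting the pixels of each target region. This is a minimal hypothetical illustration only, assuming a row-major pixel grid and (x1, y1, x2, y2) box coordinates; Mozer's module may instead remove or transform the region.

```python
# Hypothetical occlusion step: overwrite the pixels inside each target
# region so the PII no longer appears in the picture.

def occlude(image, boxes, fill=0):
    """Overwrite each (x1, y1, x2, y2) box in a 2D pixel grid with `fill`."""
    for x1, y1, x2, y2 in boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                image[y][x] = fill
    return image

picture = [[255] * 6 for _ in range(4)]        # a 6x4 all-white picture
sanitized = occlude(picture, [(1, 1, 3, 3)])   # occlude a 2x2 target region
```

Pixels inside the target region are blacked out while all other pixels are left unchanged, mirroring the distinction drawn above between regions that are occluded and regions that are not.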
Re claim 4, Mozer discloses wherein the obtaining the entity content region and the non-entity content region by performing the entity recognition based on the character region comprises: based on the character region being obtained by performing the character recognition on the second type region using the optical character recognition technology, obtaining the entity content region and the non-entity content region by performing the entity recognition on the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII. For example, in the case where a sequence of numbers matching a template of the form ###-###-#### is found, PII sanitizing module 202 can conclude that this sequence likely represents a phone number. Further, in the case where a sequence of characters and symbols matching the template *@*.* is found, PII sanitizing module 202 can conclude that this sequence likely represents an email address”; note that optical character recognition to detect PII [i.e., entity recognition] is understood to be performed on the visual data [i.e., the image or video] as a whole; this would include OCR to look for PII in all of the regions of the image, including the second type region).
Re claim 5, Mozer discloses wherein the obtaining the entity content region and the non-entity content region by performing the entity recognition based on the character region comprises: based on the character region being obtained by performing the character recognition on the target picture using the optical character recognition technology, obtaining the entity content region and the non-entity content region by performing the entity recognition on the first type region, the second type region, and the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII. For example, in the case where a sequence of numbers matching a template of the form ###-###-#### is found, PII sanitizing module 202 can conclude that this sequence likely represents a phone number. Further, in the case where a sequence of characters and symbols matching the template *@*.* is found, PII sanitizing module 202 can conclude that this sequence likely represents an email address”; note that optical character recognition to detect PII [i.e., entity recognition] is understood to be performed on the visual data [i.e., the image or video] as a whole; this would include OCR to look for PII in all of the regions of the image).
Re claim 12, Mozer discloses an electronic device comprising: at least one processor; and a memory storing instructions, wherein the at least one processor is configured to execute the instructions to (see paragraphs 41 and 46; note that the electronic device comprises a processor and memory storing code to execute the invention):
obtain a target picture (see paragraphs 21-22; note that the visual data comprises an image or video);
perform region type detection and character recognition on the target picture by combining a target type detection technology (see paragraph 22, “For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person”; note that PII regions, including faces, are detected) with an optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that regions which contain text that is PII are determined);
determine a target region to be occluded from the target picture based on a result of the region type detection and a result of the character recognition (see paragraphs 23-25; note that regions containing PII, which are to be sanitized [occluded], are determined);
and occlude the target region in the target picture (see paragraph 25, “PII sanitizing module 202 can sanitize (i.e., remove, obfuscate, or transform) the identified PII, thereby converting the visual data sample into a sanitized/identity-neutral form”; note that PII may be sanitized and removed).
obtaining a first type region and a second type region by performing region type detection on the target picture using the target type detection technology (see paragraph 22, “In one set of embodiments, the identification performed at step 304 can involve using an ML model (e.g., neural network, decision tree, support vector machine, etc.) that reads pixel values of the visual data sample and outputs region proposals (e.g., bounding boxes or segmentation maps) indicating regions in the visual data sample that are likely to contain PII of a given type. For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person. And for a region R2 with pixel values that the ML model has determined are indicative of a street sign or some other location indicator, the ML model may output a region proposal indicating that R2 is likely to contain that street sign/location indicator.” Note that regions containing PII in the image or video are determined; this corresponds to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII); and obtaining a character region by performing character recognition on the target picture or the second type region using the optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample.
These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that character recognition is performed on the image or video), and wherein the first type region is a region type to be occluded, and the second type region is a region type not to be occluded (see paragraph 22; note that regions containing PII in the image or video are determined, corresponding to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII; see also paragraphs 24 and 25; note that the regions containing PII are regions to be occluded and other regions are not occluded).
obtaining an entity content region and a non-entity content region by performing entity recognition based on the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that PII corresponds to the entity content within the text); and determining the first type region and the entity content region as the target region to be occluded (see paragraphs 23-27; note that biological feature PII and text-based PII are both occluded).
Re claim 15, Mozer discloses, based on the character region being obtained by performing the character recognition on the second type region using the optical character recognition technology, obtaining the entity content region and the non-entity content region by performing the entity recognition on the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII. For example, in the case where a sequence of numbers matching a template of the form ###-###-#### is found, PII sanitizing module 202 can conclude that this sequence likely represents a phone number. Further, in the case where a sequence of characters and symbols matching the template *@*.* is found, PII sanitizing module 202 can conclude that this sequence likely represents an email address”; note that optical character recognition to detect PII [i.e., entity recognition] is understood to be performed on the visual data [i.e., the image or video] as a whole; this would include OCR to look for PII in all of the regions of the image, including the second type region).
Re claim 16, Mozer discloses wherein the obtaining the entity content region and the non-entity content region by performing the entity recognition based on the character region comprises: based on the character region being obtained by performing the character recognition on the target picture using the optical character recognition technology, obtaining the entity content region and the non-entity content region by performing the entity recognition on the first type region, the second type region, and the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII. For example, in the case where a sequence of numbers matching a template of the form ###-###-#### is found, PII sanitizing module 202 can conclude that this sequence likely represents a phone number. Further, in the case where a sequence of characters and symbols matching the template *@*.* is found, PII sanitizing module 202 can conclude that this sequence likely represents an email address”; note that optical character recognition to detect PII [i.e., entity recognition] is understood to be performed on the visual data [i.e., the image or video] as a whole; this would include OCR to look for PII in all of the regions of the image).
Re claim 20, Mozer discloses a non-transitory computer readable storage medium storing instructions which are executed by a processor of an electronic device to perform a picture processing method comprising (see paragraphs 41 and 46; note that the electronic device comprises a processor and memory storing code to execute the invention):
obtaining a target picture (see paragraphs 21-22; note that the visual data comprises an image or video);
performing region type detection and character recognition on the target picture by combining a target type detection technology (see paragraph 22, “For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person”; note that PII regions, including faces, are detected) with an optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that regions which contain text that is PII are determined);
determining a target region to be occluded from the target picture based on a result of the region type detection and a result of the character recognition (see paragraphs 23-25; note that regions containing PII, which are to be sanitized [occluded], are determined);
and occluding the target region in the target picture (see paragraph 25, “PII sanitizing module 202 can sanitize (i.e., remove, obfuscate, or transform) the identified PII, thereby converting the visual data sample into a sanitized/identity-neutral form”; note that PII may be sanitized and removed).
wherein the performing the region type detection and the character recognition on the target picture by combining the target type detection technology with the optical character recognition technology comprises: obtaining a first type region and a second type region by performing region type detection on the target picture using the target type detection technology (see paragraph 22, “In one set of embodiments, the identification performed at step 304 can involve using an ML model (e.g., neural network, decision tree, support vector machine, etc.) that reads pixel values of the visual data sample and outputs region proposals (e.g., bounding boxes or segmentation maps) indicating regions in the visual data sample that are likely to contain PII of a given type. For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person. And for a region R2 with pixel values that the ML model has determined are indicative of a street sign or some other location indicator, the ML model may output a region proposal indicating that R2 is likely to contain that street sign/location indicator.” Note that regions containing PII in the image or video are determined; this corresponds to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII); and obtaining a character region by performing character recognition on the target picture or the second type region using the optical character recognition technology (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample.
These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that character recognition is performed on the image or video), and wherein the first type region is a region type to be occluded, and the second type region is a region type not to be occluded (see paragraph 22; note that regions containing PII in the image or video are determined, corresponding to the first type region, and the second type region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII; see also paragraphs 24 and 25; note that the regions containing PII are regions to be occluded and other regions are not occluded).
wherein the determining the target region to be occluded from the target picture based on the result of the region type detection and the result of the character recognition comprises: obtaining an entity content region and a non-entity content region by performing entity recognition based on the character region (see paragraph 23, “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII”; note that PII corresponds to the entity content within the text, i.e., personally identifying information, and the PII text is then obfuscated (see paragraph 27); the non-obfuscated text corresponds to the “non-entity region.” The examiner notes that, when identifying text containing PII, any text not identified corresponds to “non-entity text”); and determining the first type region and the entity content region as the target region to be occluded (see paragraphs 23-27; note that biological feature PII and text-based PII are both occluded).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mozer (US 2023/0229803) in view of Kamata (US 2017/0124347).
Re claim 10, Mozer discloses a picture after the target region is occluded (see paragraphs 23-27; note that a picture with occluded regions is generated). Mozer does not expressly disclose displaying an effect picture of the target picture before and after the target region is occluded based on one of a plurality of preset display methods, wherein the plurality of preset display methods comprises an animation mode, a split screen mode, and a touch mode. Kamata discloses displaying an effect picture of the target picture before and after the target region is occluded based on one of a plurality of preset display methods, wherein the plurality of preset display methods comprises an animation mode, a split screen mode, and a touch mode (see paragraph 102; note that an image before and after occlusion may be displayed side by side [split screen] to allow the user to compare the results). The motivation is to allow the user to see the results of the redaction (see paragraph 102). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mozer and Kamata to reach the aforementioned advantage.
Re claim 19, Mozer discloses a picture after the target region is occluded (see paragraphs 23-27; note that a picture with occluded regions is generated). Mozer does not expressly disclose display of an effect picture of the target picture before and after the target region is occluded based on one of a plurality of preset display methods, wherein the plurality of preset display methods comprise an animation mode, a split screen mode, and a touch mode. Kamata discloses displaying an effect picture of the target picture before and after the target region is occluded based on one of a plurality of preset display methods, wherein the plurality of preset display methods comprise an animation mode, a split screen mode, and a touch mode (see paragraph 102; note that an image before and after occlusion may be displayed side by side [split screen] to allow the user to compare the results). The motivation is to allow the user to see the results of the redaction (see paragraph 102). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mozer and Kamata to reach the aforementioned advantage.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mozer (US 2023/0229803) in view of Kamata (US 2017/0124347), and further in view of Azuma (US 2010/0174192).
Re claim 11, Mozer and Kamata disclose all the elements of claim 10, and Kamata further discloses wherein the displaying the effect picture of the target picture before and after the target region is occluded according to one of the plurality of preset display methods comprises displaying the effect picture by selecting one preset display method (see paragraph 102; note that an image before and after occlusion may be displayed side by side [split screen] to allow the user to compare the results). They do not expressly disclose selecting one preset display method that matches a type of the electronic device from the plurality of preset display methods based on the type of the electronic device. Azuma further discloses selecting one preset display method that matches a type of the electronic device from the plurality of preset display methods based on the type of the electronic device (see paragraph 48; note that images may be displayed side by side [split screen] or alternately displayed [animation] depending upon the screen size of the device on which they are displayed). The motivation to combine is to display the images on a smaller screen (see paragraph 48). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Azuma with Mozer and Kamata to reach the aforementioned advantage.
Allowable Subject Matter
Claims 6-9, 17, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claims 6 and 17, Mozer discloses in paragraph 22: “In one set of embodiments, the identification performed at step 304 can involve using an ML model (e.g., neural network, decision tree, support vector machine, etc.) that reads pixel values of the visual data sample and outputs region proposals (e.g., bounding boxes or segmentation maps) indicating regions in the visual data sample that are likely to contain PII of a given type. For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person. And for a region R2 with pixel values that the ML model has determined are indicative of a street sign or some other location indicator, the ML model may output a region proposal indicating that R2 is likely to contain that street sign/location indicator.” Note that regions containing PII in the image or video are determined; these correspond to the first region, and the second region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII. Further, see paragraph 23: “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII.” Note that PII corresponds to the entity content within the text.
The prior art of record does not expressly disclose the combination of wherein the obtaining the entity content region and the non-entity content region by performing the entity recognition based on the first type region, the second type region, and the character region comprises: determining an intersection over union between the character region and the first type region; determining an intersection over union between the character region and the second type region; determining a first character region having an intersection over union with the first type region that is greater than or equal to a preset threshold in the character region; and obtaining the entity content region and the non-entity content region by performing the entity recognition based on the second type region and a second character region in the character region other than the first character region.
Claims 7 and 8 depend from claim 6. Claim 18 depends from claim 17.
Regarding claim 9, Mozer discloses in paragraph 22: “In one set of embodiments, the identification performed at step 304 can involve using an ML model (e.g., neural network, decision tree, support vector machine, etc.) that reads pixel values of the visual data sample and outputs region proposals (e.g., bounding boxes or segmentation maps) indicating regions in the visual data sample that are likely to contain PII of a given type. For example, for a region R1 with pixel values that the ML model has determined are indicative of eyes, nose, and/or mouth features, the ML model may output a region proposal indicating that R1 is likely to contain a face belonging to a person or a depiction of a person. And for a region R2 with pixel values that the ML model has determined are indicative of a street sign or some other location indicator, the ML model may output a region proposal indicating that R2 is likely to contain that street sign/location indicator.” Note that regions containing PII in the image or video are determined; these correspond to the first region, and the second region corresponds to the regions of the image other than the PII, that is, regions that do not contain PII. Further, see paragraph 23: “In addition to (or in lieu of) the ML model above, the identification performed at step 304 can involve using optical character recognition (OCR) to recognize sequences of numbers, letters, and/or symbols in the visual data sample. These numbers, letters, or symbols can then be processed via a sequence template matching system or language model to identify text sequences or phrases which are known to constitute, or be revealing of, PII.” Note that PII corresponds to the entity content within the text.
The prior art of record does not expressly disclose wherein the character region comprises at least one word, wherein each word of the at least one word of the character region corresponds to a line number, wherein entity words comprised in the entity content region have a same entity number, and wherein the determining the entity content region as the target region to be occluded comprises: dividing the character region into a plurality of sub-regions based on the line number corresponding to each word of the character region, the entity content region and the non-entity content region; determining at least one sub-region comprising the entity words with a same entity number in the plurality of sub-regions; determining a type of entity content comprised in each of the at least one sub- region; and determining a sub-region having a type of entity content that is a preset privacy type of the at least one sub-region, as the target region to be occluded.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T MOTSINGER whose telephone number is (571)270-1237. The examiner can normally be reached 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN T MOTSINGER/Primary Examiner, Art Unit 2673