DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 03/29/2024 was reviewed and the listed references were noted.
Drawings
The 11-page drawings have been considered and placed on record in the file.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more, and is therefore directed to non-statutory subject matter, as follows. The claims recite identifying text in input form images, performing OCR to extract the identified text, converting the text to text image formatted characters, converting the text image formatted characters to pseudo images, providing the pseudo images to a diffusion model for training, and repeating the steps if the diffusion model's outputs are unsatisfactory.
Step 1:
With regard to Step 1, the instant claims are directed to a method or an apparatus, each of which is among the statutory categories of invention.
Step 2A – Prong 1:
With regard to Step 2A – Prong 1, for example in Claim 1, the limitations of "a) responsive to input form images, identifying text in the input form images; d) converting the text image formatted characters to pseudo images; and f) responsive to outputs of said diffusion model being unsatisfactory, repeating a)-e)", as drafted, involve only mental processes or mathematical calculations, such as identifying text on a form image, converting characters to a pseudo image, and repeating steps until the model is satisfied. That is, nothing in the above-described claim elements precludes the steps from practically being performed in the mind or on a piece of paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind or through mathematical calculations but for the recitation of generic apparatus components, such as a processor, computer program, or machine-readable media, then it falls within the "mental processes" grouping of abstract ideas, which includes concepts performed in the human mind such as an observation, evaluation, judgment, or opinion, or within the "mathematical calculations" grouping. Accordingly, the claim recites an abstract idea.
Step 2A – Prong 2:
The 2019 PEG defines the phrase "integration into a practical application" to require an additional element, or a combination of additional elements, that applies, relies on, or uses the judicial exception in a manner that imposes a meaningful limit on the judicial exception. In the instant case, the additional elements in the claims do not apply, rely on, or use the judicial exception in such a manner.
This judicial exception is not integrated into a practical application because the claim only recites the following additional steps: "b) performing optical character recognition (OCR) to extract the identified text; c) converting the identified text to text image formatted characters; e) providing said pseudo images to a diffusion model to train said diffusion model", i.e., insignificant extra-solution activity comprising routine and conventional image processing steps. The additional elements recited in certain other claims are merely a non-transitory computer-readable storage medium with a processor, which are generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they amount to field-of-use limitations that do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim as a whole recites an abstract idea.
Step 2B:
Because the claim fails under Step 2A, the claims are further evaluated under Step 2B. The claim does not include additional steps sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/steps amount to no more than insignificant extra-solution activities. Mere instructions to apply an exception using generic apparatus components, such as a processor, cannot provide an inventive concept. The claim is not patent eligible. It should be noted that a similar analysis applies to independent Claims 6 and 11.
Further, with regard to dependent Claims 2-5, 7-10, and 12-20, viewed individually, these additional steps, under their broadest reasonable interpretation, cover performance of the limitation in the mind and do not provide meaningful limitations transforming the abstract idea into a patent-eligible application such that the claims' limitations amount to significantly more than the abstract idea itself. For example, applying style characteristics including font type and font size before generating the image, as recited in Claim 3, or defining the pseudo images as grayscale or RGB, as recited in Claim 4, are only examples of routine and conventional image processing steps and do not amount to significantly more so as to be considered inventive steps. Accordingly, Claims 1-20 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Vamshi et al. (US 20240362939 A1) in view of Yoffe et al. (US 20230298338 A1), Kumari et al. (US 20240185588 A1), and Kansy et al. (US 20230377214 A1).
Regarding Claim 1, Vamshi teaches "A computer-implemented method comprising: a) responsive to input form images, identifying text in the input form images"; (Vamshi, Paras. 2 and 30, teaches the text detection/mining module is configured to receive the input in form of image data wherein the module uses text extraction techniques such as optical character recognition which depends on identifying text and its correctness, i.e., identify text in input form images in response to the input of form images);
"b) performing optical character recognition (OCR) to extract the identified text"; (Vamshi, Para. 30, teaches text extraction based on the input document format to include optical character recognition, i.e., perform OCR to extract identified text);
"c) converting the identified text to text image formatted characters"; (Vamshi, Para. 30, teaches the text detection/mining module determines textual information from the input image by converting the document image into the readable text image to determine text characters for each row of the document, i.e., convert identified text to text image formatted characters being the conversion into the readable text image to determine text characters).
However, Vamshi does not explicitly teach "d) converting the text image formatted characters to pseudo images; e) providing said pseudo images to a diffusion model to train said diffusion model; and f) responsive to outputs of said diffusion model being unsatisfactory, repeating a)-e)”.
In an analogous field of endeavor, Yoffe teaches "d) converting the text image formatted characters to pseudo images"; (Yoffe, Paras. 23-37, teaches the processing circuitry is configured to perform OCR on at least one image of the plurality of images and utilize data derivative of the performing the OCR to train the deep learning model and enable the user to modify the respective image thereby giving rise to one or more synthetic images wherein the synthetic images are utilized to train the deep learning model and wherein the modifying of the respective image includes overlaying or modifying image text, i.e., converting the text image formatted characters to pseudo images being the rise of synthetic images from the modification of the respective image).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, wherein the respective image is the identified text converted to text image formatted characters, by including the conversion of text image formatted characters to pseudo images taught by Yoffe. One of ordinary skill in the art would be motivated to combine the references since doing so improves the training (Yoffe, Para. 29, teaches the motivation of combination to be to improve the training of the deep learning model).
However, the combination of references of Vamshi in view of Yoffe does not explicitly teach “e) providing said pseudo images to a diffusion model to train said diffusion model; and f) responsive to outputs of said diffusion model being unsatisfactory, repeating a)-e)”.
In an analogous field of endeavor, Kumari teaches "e) providing said pseudo images to a diffusion model to train said diffusion model"; (Kumari, Para. 3, teaches an image generation system including a diffusion model may receive an image of the new concept, generate a synthetic image, compare the received image to the synthetic image, and train the diffusion model by updating selected parameters based on the comparison, i.e., provide pseudo images to a diffusion model to train the diffusion model being the input of the synthetic images to train the diffusion model).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi and Yoffe by including the providing of pseudo images to a diffusion model for training taught by Kumari. One of ordinary skill in the art would be motivated to combine the references since it improves performance of the model (Kumari, Para. 51, teaches the motivation of combination to be to improve performance of the diffusion model).
However, the combination of references of Vamshi in view of Yoffe and Kumari does not explicitly teach “and f) responsive to outputs of said diffusion model being unsatisfactory, repeating a)-e)”.
In an analogous field of endeavor, Kansy teaches "and f) responsive to outputs of said diffusion model being unsatisfactory, repeating a)-e)"; (Kansy, FIG. 4 and Para. 72, teaches the training engine determining that the diffusion neural network should continue to be trained using the reconstruction loss until one or more conditions are met and wherein while training for reconstruction continues, training engine repeats steps, i.e., responsive to outputs of diffusion model being unsatisfactory being that conditions are not met and training continues so that steps are repeated).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, Yoffe, and Kumari wherein the steps comprise identifying text in the input form images, performing OCR to extract the text, converting the text to text image formatted characters, converting the text image formatted characters to pseudo images, and providing the pseudo images to the diffusion model for training by including the repeating of steps if the output of the model is not satisfactory taught by Kansy. One of ordinary skill in the art would be motivated to combine the references since it lowers loss to below a threshold (Kansy, Para. 72, teaches the motivation of combination to be to lower the reconstruction loss to below a threshold).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
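For illustration only (no code appears in the application or the cited references), the claimed loop of steps a)-f) mapped above can be sketched as follows; every name below is a hypothetical toy stand-in for the corresponding claimed operation, not any party's actual implementation:

```python
# Hypothetical sketch of claimed steps a)-f); each helper is a toy stand-in
# for the corresponding claimed operation, not an actual implementation.

class ToyDiffusionModel:
    """Stand-in model: 'training' simply counts the pseudo images seen."""
    def __init__(self, target=3):
        self.seen = 0
        self.target = target

    def train(self, pseudo_images):
        self.seen += len(pseudo_images)

    def satisfactory(self):
        # Stand-in for the step f) acceptance test on the model's outputs.
        return self.seen >= self.target

def identify_text(form_images):            # a) identify text in input form images
    return [img["text"] for img in form_images]

def perform_ocr(regions):                  # b) OCR extracts the identified text
    return [r.strip() for r in regions]

def to_text_image_characters(texts):       # c) text -> text image formatted characters
    return [list(t) for t in texts]

def to_pseudo_images(char_lists):          # d) characters -> pseudo images
    return [[ord(c) for c in chars] for chars in char_lists]

def train_until_satisfactory(form_images, model, max_rounds=10):
    for _ in range(max_rounds):
        regions = identify_text(form_images)      # a)
        texts = perform_ocr(regions)              # b)
        chars = to_text_image_characters(texts)   # c)
        pseudo = to_pseudo_images(chars)          # d)
        model.train(pseudo)                       # e) train the diffusion model
        if model.satisfactory():                  # f) stop; otherwise repeat a)-e)
            break
    return model

model = train_until_satisfactory([{"text": "Name:"}, {"text": "Date:"}],
                                 ToyDiffusionModel(target=3))
```

The point of the sketch is only the control flow of step f): steps a)-e) repeat until the model's outputs pass the acceptance test.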
Claim 11 recites an apparatus with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Vamshi, Yoffe, Kumari, and Kansy references, presented in rejection of Claim 1, apply to this claim. Finally, the combination of the Vamshi, Yoffe, Kumari, and Kansy references discloses a processor and a memory (for example, see Vamshi, Paragraph 5).
Claims 2, 4, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Vamshi in view of Yoffe, Kumari, Kansy, and Zhu et al. (US 20240020863 A1).
Regarding Claim 2, the combination of references of Vamshi in view of Yoffe, Kumari, and Kansy does not explicitly teach "The method of claim 1, further comprising: g) responsive to outputs of said diffusion model being satisfactory: i. performing text image conversion on said pseudo images; ii. converting said pseudo images to text within bounding boxes; iii. identifying text in said bounding boxes; iv. applying font characteristics to said identified text; and v. generating further form images".
In an analogous field of endeavor, Zhu teaches "The method of claim 1, further comprising: g) responsive to outputs of said diffusion model being satisfactory: i. performing text image conversion on said pseudo images"; (Zhu, Paras. 62 and 86, teaches a synthetic image with described variations randomly assigned is used to train one or more neural networks to further identify text from images and convert it into a machine-readable form, wherein the untrained neural network is trained until it achieves a desired accuracy, i.e., when the output of the model is satisfactory, being that the network is trained to a desired accuracy, the synthetic image is converted to a text image);
"ii. converting said pseudo images to text within bounding boxes"; (Zhu, FIG. 2 and Paras. 62-64, teaches a synthetic image with described variations randomly assigned is used to train one or more neural networks to further identify text from images and convert into a machine-readable form wherein a DNN is used to identify locations of textual information from one or more images and output one or more bounding polygons for each group of text, i.e., convert pseudo images to text within bounding boxes being the conversion of synthetic images to a machine readable text form and outputting bounding polygons for each group of text);
"iii. identifying text in said bounding boxes"; (Zhu, FIG. 2 and Paras. 62-64, teaches a DNN is used to identify locations of textual information from one or more images and output one or more bounding polygons for each group of text, i.e., text is identified in bounding boxes);
"iv. applying font characteristics to said identified text"; (Zhu, FIG. 2 and Paras. 62-64, teaches a synthetic image output varies from an inputted image by differing fonts wherein text identified in images include font and other textual features located in an image using an identification method, i.e., apply font characteristics to identified text by differing fonts);
"and v. generating further form images"; (Zhu, FIG. 3 and Para. 68, teaches using a real image or synthetic image as an original image wherein augmented images are output, i.e., generating further form images).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, Yoffe, Kumari, and Kansy wherein the model is a diffusion model and the images are form images by including the text image conversion on pseudo images, converting the pseudo images to text in bounding boxes, identifying text in bounding boxes, applying font characteristics to the text, and generating further images taught by Zhu. One of ordinary skill in the art would be motivated to combine the references since it increases the accuracy of the OCDR (Zhu, Para. 64, teaches the motivation of combination to be to increase accuracy for OCDR).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
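For illustration only, the post-training steps g)(i)-(v) mapped above can be sketched as follows; every helper name is a hypothetical stand-in for the corresponding claimed operation, not any party's actual implementation:

```python
# Hypothetical sketch of claimed steps g)(i)-(v); each helper is a toy
# stand-in for the corresponding claimed operation.

def pseudo_to_text_with_boxes(pseudo_images):      # i) and ii): text within bounding boxes
    return [{"text": p["text"], "box": (0, 0, 10, 2)} for p in pseudo_images]

def apply_font(entries, font="hypothetical-font", size=12):  # iv) font characteristics
    return [dict(e, font=font, size=size) for e in entries]

def generate_form_images(entries):                 # v) generate further form images
    return [f"[{e['font']} {e['size']}] {e['text']}" for e in entries]

boxes = pseudo_to_text_with_boxes([{"text": "Name:"}, {"text": "Date:"}])
identified = [b for b in boxes if b["text"]]       # iii) identify text in said boxes
forms = generate_form_images(apply_font(identified))
```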
Regarding Claim 4, the combination of references of Vamshi in view of Yoffe, Kumari, Kansy, and Zhu teaches "The method of claim 1, wherein the pseudo images are one of grayscale or RGB images"; (Zhu, Para. 61, teaches data interpreted from an image is altered to create a synthetic image output which is generated using a color model including an RGB color model, i.e., pseudo images are RGB images).
The proposed combination as well as the motivation for combining the Vamshi, Yoffe, Kumari, Kansy, and Zhu references presented in the rejection of Claim 2, applies to claim 4. Thus, the method recited in claim 4 is met by Vamshi in view of Yoffe, Kumari, Kansy, and Zhu.
Claim 12 recites an apparatus with elements corresponding to the steps recited in Claim 2. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Vamshi, Yoffe, Kumari, Kansy, and Zhu references, presented in rejection of Claim 2, apply to this claim. Finally, the combination of the Vamshi, Yoffe, Kumari, Kansy, and Zhu references discloses a processor and a memory (for example, see Vamshi, Paragraph 5).
Claim 14 recites an apparatus with elements corresponding to the steps recited in Claim 4. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Vamshi, Yoffe, Kumari, Kansy, and Zhu references, presented in rejection of Claim 4, apply to this claim. Finally, the combination of the Vamshi, Yoffe, Kumari, Kansy, and Zhu references discloses a processor and a memory (for example, see Vamshi, Paragraph 5).
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Vamshi in view of Yoffe, Kumari, Kansy, Zhu, and Zhang et al. (US 20230040974 A1).
Regarding Claim 3, the combination of references of Vamshi in view of Yoffe, Kumari, Kansy, and Zhu does not explicitly teach "The method of claim 2, further comprising applying style characteristics to said identified text; wherein said generated further form images have applied font type and font size, and said style characteristics are added before said further form images are generated".
In an analogous field of endeavor, Zhang teaches "The method of claim 2, further comprising applying style characteristics to said identified text; wherein said generated further form images have applied font type and font size, and said style characteristics are added before said further form images are generated"; (Zhang, FIGS. 6A-6C and Paras. 9-14 and Para. 50, teaches the generating the artificial image data comprises identifying the visual format of the structured data in the one or more image portions, generating one or more artificial image portions corresponding to the one or more image portions based on the visual format of the structured data, and modifying the image data to replace the image portions with artificial image portions, in which different visual formats at different image portions can be taken into consideration for obscuring the sensitive data, wherein the visual format comprises one or more of text length, text font, text color, the size of the text, and any special style characteristics, i.e., apply style characteristics to text wherein the generated further images, being the generation of artificial image data, have applied font type and size and style characteristics are added before further images are generated).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, Yoffe, Kumari, Kansy, and Zhu by including the application of font type, size, and style characteristics are added before the form images are generated taught by Zhang. One of ordinary skill in the art would be motivated to combine the references since it reduces processing requirements (Zhang, Para. 9, teaches the motivation of combination to be to reduce processing requirements and replace sensitive data).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
Claim 13 recites an apparatus with elements corresponding to the steps recited in Claim 3. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Vamshi, Yoffe, Kumari, Kansy, Zhu, and Zhang references, presented in rejection of Claim 3, apply to this claim. Finally, the combination of the Vamshi, Yoffe, Kumari, Kansy, Zhu, and Zhang references discloses a processor and a memory (for example, see Vamshi, Paragraph 5).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Vamshi in view of Yoffe, Kumari, Kansy, Zhu, Shukla et al. (US 20240320820 A1), and Wen (US 20170169548 A1).
Regarding Claim 5, the combination of references of Vamshi in view of Yoffe, Kumari, Kansy, and Zhu does not explicitly teach "The method of claim 4, wherein each of said pseudo images comprises a plurality of grayscale pseudo pixels; wherein a grayscale value Ri of each of said plurality of pseudo pixels are obtained according to the following:
Ri = Ni / M * 255
where
Ri = Grayscale Pseudo Pixel Value
M = Number of entries in a lookup table comprising an alphabet
Ni = Numerical Value of entry in said lookup table".
In an analogous field of endeavor, Shukla teaches "The method of claim 4, wherein each of said pseudo images comprises a plurality of grayscale pseudo pixels"; (Shukla, Para. 65, teaches a grayscale is applied on the set of synthetic images to obtain a set of grayscale images which includes setting the color of each pixel in the synthetic image to a monochromatic format, i.e., pseudo images comprise a plurality of grayscale pseudo pixels being the converted synthetic pixels to grayscale).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, Yoffe, Kumari, Kansy, and Zhu by including the pseudo image comprising a plurality of grayscale pseudo pixels taught by Shukla. One of ordinary skill in the art would be motivated to combine the references since it improves analysis of the images.
However, the combination of references of Vamshi in view of Yoffe, Kumari, Kansy, Zhu, and Shukla does not explicitly teach "wherein a grayscale value Ri of each of said plurality of pseudo pixels are obtained according to the following:
Ri = Ni / M * 255
where
Ri = Grayscale Pseudo Pixel Value
M = Number of entries in a lookup table comprising an alphabet
Ni = Numerical Value of entry in said lookup table".
In an analogous field of endeavor, Wen teaches "wherein a grayscale value Ri of each of said plurality of pseudo pixels are obtained according to the following: Ri = Ni / M * 255 where Ri = Grayscale Pseudo Pixel Value M= Number of entries in a lookup table comprising an alphabet Ni = Numerical Value of entry in said lookup table"; (Wen, Paras. 18-33, teaches performing maximum normalization of the cumulative calculation of the histogram with the max cumulative count C(255) and multiplying this maximum normalization by 255 to obtain an enhancement gray scale table out with calculation and looking up the table to obtain a new output gray scale value, i.e., grayscale value of pixels is output from a lookup table by multiplying a count of entries divided by its max entries by 255).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Vamshi, Yoffe, Kumari, Kansy, Zhu, and Shukla wherein pixels are pseudo pixels by including the determination of grayscale value of each pixel according to a normalization of the number of an entry in a table divided by the max number of entries in the table multiplied by 255 taught by Wen. One of ordinary skill in the art would be motivated to combine the references since it adjusts the gray scale distribution (Wen, Para. 3, teaches the motivation of combination to be to adjust the gray scale distribution of the image and increase distribution range).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
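For illustration only, the grayscale mapping recited in Claim 5, Ri = Ni / M * 255, can be computed as follows; the 26-letter lookup table (A=1 ... Z=26) is a hypothetical example, not taken from the application:

```python
# Hypothetical worked example of the claimed mapping Ri = Ni / M * 255.
# The lookup table below (A=1 ... Z=26) is an illustrative assumption.

lookup = {ch: i + 1 for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
M = len(lookup)  # M = number of entries in the lookup table (26)

def grayscale_value(ch):
    Ni = lookup[ch]              # Ni = numerical value of the entry
    return round(Ni / M * 255)   # Ri = grayscale pseudo pixel value, 0-255

pixels = [grayscale_value(c) for c in "FORM"]
```

Under this hypothetical table, the last entry "Z" maps to the maximum grayscale value 255, so each character occupies a distinct, evenly spaced gray level.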
Claim 15 recites an apparatus with elements corresponding to the steps recited in Claim 5. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Vamshi, Yoffe, Kumari, Kansy, Zhu, Shukla, and Wen references, presented in rejection of Claim 5, apply to this claim. Finally, the combination of the Vamshi, Yoffe, Kumari, Kansy, Zhu, Shukla, and Wen references discloses a processor and a memory (for example, see Vamshi, Paragraph 5).
Claims 6, 9, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Kumari.
Regarding Claim 6, Zhu teaches "A computer-implemented method comprising: a) performing text image conversion on pseudo images output from a"; (Zhu, FIG. 3 and Paras. 62 and 86, teaches a synthetic image with described variations randomly assigned is used to train one or more neural networks to further identify text from images and convert into a machine-readable form wherein the untrained neural network is trained until it achieves a desired accuracy, i.e., the synthetic image output from a trained model is converted to a text image);
"b) converting said pseudo images to text within bounding boxes"; (Zhu, FIG. 2 and Paras. 62-64, teaches a synthetic image with described variations randomly assigned is used to train one or more neural networks to further identify text from images and convert into a machine-readable form wherein a DNN is used to identify locations of textual information from one or more images and output one or more bounding polygons for each group of text, i.e., convert pseudo images to text within bounding boxes being the conversion of synthetic images to a machine readable text form and outputting bounding polygons for each group of text);
"c) identifying text in said bounding boxes"; (Zhu, FIG. 2 and Paras. 62-64, teaches a DNN is used to identify locations of textual information from one or more images and output one or more bounding polygons for each group of text, i.e., text is identified in bounding boxes);
"and d) generating further form images"; (Zhu, FIG. 3 and Para. 68, teaches using a real image or synthetic image as an original image wherein augmented images are output, i.e., generating further form images).
However, Zhu does not explicitly teach "diffusion model".
In an analogous field of endeavor, Kumari teaches "diffusion model"; (Kumari, Para. 3, teaches an image generation system including a diffusion model may receive an image of the new concept and generate a synthetic image, i.e., pseudo images output from a diffusion model).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Zhu wherein text image conversion is performed on pseudo images that are output from a model by including the pseudo images being output from a diffusion model taught by Kumari. One of ordinary skill in the art would be motivated to combine the references since it improves performance (Kumari, Para. 51, teaches the motivation of combination to be to improve performance of the diffusion model).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
Regarding Claim 9, the combination of references of Zhu in view of Kumari teaches "The method of claim 6, wherein the pseudo images are one of grayscale or RGB images"; (Zhu, Para. 61, teaches data interpreted from an image is altered to create a synthetic image output which is generated using a color model including an RGB color model, i.e., pseudo images are RGB images).
Claim 16 recites an apparatus with elements corresponding to the steps recited in Claim 6. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Zhu and Kumari references, presented in rejection of Claim 6, apply to this claim. Finally, the combination of the Zhu and Kumari references discloses a processor and a memory (for example, see Zhu, Paragraph 75).
Claim 19 recites an apparatus with elements corresponding to the steps recited in Claim 9. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Zhu and Kumari references, presented in rejection of Claim 9, apply to this claim. Finally, the combination of the Zhu and Kumari references discloses a processor and a memory (for example, see Zhu, Paragraph 75).
Claims 7, 8, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Kumari and Zhang.
Regarding Claim 7, the combination of references of Zhu in view of Kumari does not explicitly teach "The method of claim 6, further comprising applying font characteristics to said identified text".
In an analogous field of endeavor, Zhang teaches "The method of claim 6, further comprising applying font characteristics to said identified text"; (Zhang, FIGS. 6A-6C and Paras. 9-14 and Para. 50, teaches the generating the artificial image data comprises identifying the visual format of the structure data in the one or more image portions and generating one or more artificial image portions corresponding to the one or more image portions based on the visual format of the structured data and modifying the image data to replace the image portions with artificial image portions in which different visual formats at different image portions can be taken into consideration for obscuring the sensitive data wherein the visual format comprises one or more of text length, text font, text color, the size of the text, and any special style characteristics, i.e., apply font style characteristics to identified text ).
The proposed combination, as well as the motivation for including the Zhang reference presented in the rejection of Claim 3, applies to claim 7. Thus, the method recited in claim 7 is met by Zhu in view of Kumari and Zhang.
Regarding Claim 8, the combination of references of Zhu in view of Kumari and Zhang teaches "The method of claim 7, further comprising applying style characteristics to said identified text, wherein said generated further form images have applied font type and font size, and said style characteristics are added before said further form images are generated"; (Zhang, FIGS. 6A-6C and Paras. 9-14 and Para. 50, teaches the generating the artificial image data comprises identifying the visual format of the structure data in the one or more image portions and generating one or more artificial image portions corresponding to the one or more image portions based on the visual format of the structured data and modifying the image data to replace the image portions with artificial image portions in which different visual formats at different image portions can be taken into consideration for obscuring the sensitive data wherein the visual format comprises one or more of text length, text font, text color, the size of the text, and any special style characteristics, i.e., apply style characteristics to text wherein the generated further images being the generation of artificial image data have applied font type and size and style characteristics are added before further images are generated).
The proposed combination as well as the motivation for combining the Vamshi, Yoffe, Kumari, Kansy, Zhu, and Zhang references presented in the rejection of Claim 3, applies to claim 8. Thus, the method recited in claim 8 is met by Zhu in view of Kumari and Zhang.
Claim 17 recites an apparatus with elements corresponding to the steps recited in Claim 7. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Zhu, Kumari, and Zhang references, presented in the rejection of Claim 7, apply to this claim. Finally, the combination of the Zhu, Kumari, and Zhang references discloses a processor and a memory (for example, see Zhu, Paragraph 75).
Claim 18 recites an apparatus with elements corresponding to the steps recited in Claim 8. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Zhu, Kumari, and Zhang references, presented in the rejection of Claim 8, apply to this claim. Finally, the combination of the Zhu, Kumari, and Zhang references discloses a processor and a memory (for example, see Zhu, Paragraph 75).
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Kumari, Shukla, Wen, and Yang et al. (US 20210019352 A1).
Regarding Claim 10, the combination of references of Zhu in view of Kumari does not explicitly teach "The method of claim 9, wherein in said converting, said text comprises text characters from a lookup table having a numerical value for each of said text characters, the pseudo images comprising pseudo pixels each having a gray scale value, said text characters being determined according to the following:
Ni = Ri / 255 * M
where
Ni = Numerical Value of a character in a lookup table
M = Total size of said lookup table
Ri = Grayscale Pseudo Pixel Value".
In an analogous field of endeavor, Yang teaches "The method of claim 9, wherein in said converting, said text comprises text characters from a lookup table having a numerical value for each of said text characters"; (Yang, Paras. 33 and 41, teaches a lookup table includes an index value and the different characters respectively corresponding to each of the index values wherein the index value is an integer from zero to an integer to less than the predetermined value, i.e., text comprises text characters from a lookup table having a numerical value for each of said text characters).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Zhu and Kumari such that the converting is a converting of pseudo images to text, with the text comprising text characters from a lookup table having a numerical value for each of the text characters, as taught by Yang. One of ordinary skill in the art would be motivated to combine the references to provide a text comparison method and product capable of maintaining privacy (Yang, Para. 2, teaches this as the motivation of combination).
However, the combination of references of Zhu in view of Kumari and Yang does not explicitly teach "the pseudo images comprising pseudo pixels each having a gray scale value, said text characters being determined according to the following:
Ni = Ri / 255 * M
where
Ni = Numerical Value of a character in a lookup table
M = Total size of said lookup table
Ri = Grayscale Pseudo Pixel Value".
In an analogous field of endeavor, Shukla teaches "the pseudo images comprising pseudo pixels each having a gray scale value"; (Shukla, Para. 65, teaches a grayscale is applied on the set of synthetic images to obtain a set of grayscale images which includes setting the color of each pixel in the synthetic image to a monochromatic format, i.e., pseudo images comprise a plurality of grayscale pseudo pixels being the converted synthetic pixels to grayscale).
The proposed combination as well as the motivation for combining the Vamshi, Yoffe, Kumari, Kansy, Zhu, Shukla, and Wen references presented in the rejection of Claim 5, applies to claim 10.
However, the combination of references of Zhu in view of Kumari, Yang, and Shukla does not explicitly teach "said text characters being determined according to the following:
Ni = Ri / 255 * M
where
Ni = Numerical Value of a character in a lookup table
M = Total size of said lookup table
Ri = Grayscale Pseudo Pixel Value".
In an analogous field of endeavor, Wen teaches "said text characters being determined according to the following:
Ni = Ri / 255 * M
where
Ni = Numerical Value of a character in a lookup table
M = Total size of said lookup table
Ri = Grayscale Pseudo Pixel Value"; (Wen, Paras. 18-33, teaches performing maximum normalization of the cumulative calculation of the histogram with the maximum cumulative count C(255), and multiplying this maximum normalization by 255 to obtain an enhancement grayscale lookup table, which is then looked up to obtain a new output grayscale value, i.e., the cumulative count of entries is equal to a grayscale output from the lookup table divided by 255 and multiplied by the maximum cumulative count).
The proposed combination as well as the motivation for combining the Vamshi, Yoffe, Kumari, Kansy, Zhu, Shukla, and Wen references presented in the rejection of Claim 5, applies to claim 10. Thus, the method recited in claim 10 is met by Zhu in view of Kumari, Yang, Shukla, and Wen.
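For clarity of the record, the mapping recited in Claim 10 can be illustrated with a minimal sketch. The sketch below is illustrative only: the function and variable names mirror the claim language (Ni, Ri, M) but are otherwise hypothetical and are not drawn from any cited reference, and the clamping of the top-of-range value is an assumption made so that Ri = 255 indexes the last table entry.

```python
# Minimal illustrative sketch of the claimed mapping Ni = Ri / 255 * M:
# each grayscale pseudo pixel value Ri (0-255) is scaled into a
# numerical value Ni indexing a character lookup table of total size M.
def pixel_to_char(ri, lookup_table):
    m = len(lookup_table)        # M: total size of the lookup table
    ni = int(ri / 255 * m)       # Ni: numerical value of a character
    ni = min(ni, m - 1)          # assumed clamp so Ri = 255 stays in range
    return lookup_table[ni]

# Hypothetical 95-entry table of printable ASCII characters.
table = [chr(c) for c in range(32, 127)]
text = "".join(pixel_to_char(r, table) for r in [0, 128, 255])
```

Under these assumptions, a pseudo pixel value of 0 maps to the first table entry and 255 to the last, consistent with the claimed linear scaling.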
Claim 20 recites an apparatus with elements corresponding to the steps recited in Claim 10. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Zhu, Kumari, Yang, Shukla, and Wen references, presented in the rejection of Claim 10, apply to this claim. Finally, the combination of the Zhu, Kumari, Yang, Shukla, and Wen references discloses a processor and a memory (for example, see Zhu, Paragraph 75).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW STEVEN BUDISALICH whose telephone number is (703)756-5568. The examiner can normally be reached Monday - Friday 8:30am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW S BUDISALICH/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662