DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/08/2024 and 09/11/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Specification
1 The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
2 Claims 8-24 are objected to under 37 CFR 1.75(c) as being in improper form because one or more multiple dependent claims within Claims 8-24 depend from Claim 5, which is itself a multiple dependent claim. See MPEP § 608.01(n). Accordingly, claims 8-24 have not been further treated on the merits.
Claim Rejections - 35 USC § 112
3 The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
4 Claim(s) 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
5 Claim 1 recites the limitation “…computing a parameter λij…” without giving an explanation as to what the parameter specifically is. This language renders the claim indefinite.
6 Claims 2-7 are rejected by virtue of their dependency upon Claim 1, which is indefinite.
Claim Rejections - 35 USC § 103
7 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9 Claim(s) 1 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1).
10 Regarding claim 1, Li teaches a process for applying style images Isj to a content image Ic containing entity classes i (i: 1, 2, …, M) ([Section 3] reciting “Our photorealistic image stylization algorithm consists of two steps as illustrated in Figure 2. The first step is a stylization transform F1 called PhotoWCT. Given a style photo IS, F1 transfer the style of IS to the content photo IC while minimizing structural artifacts in the output image.”; [Section 3.1] reciting “The matrices EC and ES are the corresponding orthonormal matrices of the eigenvectors, respectively. After the transformation, the correlations of transformed features match those of the style features…”) comprising the steps of:
providing a plurality of j style images (Isj: Is1, Is2, …, Isn), each containing entity classes i (i: 1, 2, …, M) ([Section 4] reciting “When performing PhotoWCT stylization, for each semantic label, we compute a pair of projection matrices PC and PS using the features from the image regions with the same label in the content and style photos, respectively. The pair is then used to stylize these image regions.”; [Section 3.1] reciting “The matrices EC and ES are the corresponding orthonormal matrices of the eigenvectors, respectively. After the transformation, the correlations of transformed features match those of the style features…”)
for each entity class i,
for each style image Isj of the plurality of style images (Isj: Is1, Is2, …, Isn), computing a parameter λij representing the similarity between each style image Isj and the content image Ic; ([Section 3.2] reciting “The PhotoWCT-stylized result (Figure 4(d)) still looks less like a photo since semantically similar regions are often stylized inconsistently. As shown in Figure 4, when applying the PhotoWCT to stylize the day-time photo using the night-time photo, the stylized sky region would be more photorealistic if it were uniformly dark blue instead of partly dark and partly light blue. It is based on this observation, we employ the pixel affinities in the content photo to smooth the PhotoWCT-stylized result.”; [Section 4] reciting “We use the similarity between the boundary maps extracted from stylized and original content photos as the criteria since object boundaries should remain the same despite the stylization [44].”)
stylising the content image Ic by applying the selected style image Isw to the content image Ic, to generate a stylised content image Ics ([Section 3.2] reciting “The stylization output generated by the PhotoWCT better preserves local structures in the content images, which is important for the image smoothing step…”).
11 Li does not explicitly teach selecting, from the plurality of style images Isj, the style image Isw with the parameter λiw representing the highest said similarity.
12 Tomar teaches selecting, from the plurality of style images Isj, the style image Isw with the parameter λiw representing the highest said similarity ([0029] reciting “In some embodiments and as described herein, an “image style” or “image effect” typically refers to the manner in which the content of images are generated or styled, as opposed to the content itself.”; [0046] reciting “The scoring component 107 is generally responsible for generating a similarity score for each predetermined image style, which is indicative of a measure of similarity between the one or more features extracted by the image style extracting component 105 and each predetermined image style…Accordingly, particular embodiments would score the first predetermined image style the highest, followed by lower scores directly proportional to the distance between feature vectors.”).
13 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li to incorporate the teachings of Tomar, thereby providing a method that selects the style image with the highest said similarity based on the use of style and content images taught by Li. Doing so would also produce lower scores directly proportional to the distance between feature vectors, as stated by Tomar ([0046]).
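For illustration only (not part of the statement of rejection), the selection-and-stylization process recited in claim 1 can be sketched in Python. This is a minimal sketch under stated assumptions: similarity_fn and stylize_fn are hypothetical placeholders for whatever similarity measure and style-transfer method (e.g., Li's PhotoWCT) are actually used; the claim itself does not specify them.

import numpy as np

def select_and_stylize(content_img, content_labels, style_imgs, style_labels,
                       similarity_fn, stylize_fn):
    # For each entity class i, compute lambda_ij for every style image j,
    # pick the style image I_sw with the highest similarity, and stylize
    # the class-i region of the content image with it.
    stylized = content_img.astype(np.float64).copy()
    for i in np.unique(content_labels):
        lambdas = [similarity_fn(content_img, content_labels, s_img, s_lab, i)
                   for s_img, s_lab in zip(style_imgs, style_labels)]
        w = int(np.argmax(lambdas))          # index of the winning style I_sw
        mask = content_labels == i
        stylized[mask] = stylize_fn(content_img, style_imgs[w], mask)[mask]
    return stylized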
14 Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1) as applied to claim 1 above, further in view of Dundar et al. (US 20190244060 A1) and Zhu et al. (US 20060177100 A1).
15 Regarding claim 2, Li in view of Tomar teach a process as in Claim 1 (see claim 1 rejection above), but do not explicitly teach wherein the step of computing the parameter λij comprises computing
[Equation reproduced in the original as media_image1.png (69 × 134, greyscale): the claimed formula for computing λij]
wherein, in the above equation, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly, and the selecting step comprises selecting the style image Isw with the highest value of λiw for each i-value.
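Because the claimed formula is reproduced above only as an image, the following Python sketch uses an assumed stand-in built from the quantities the claim names: Ci and Sij, the numbers of pixels labelled i in Ic and Isj. The ratio form below is hypothetical; the actual claimed equation may differ.

import numpy as np

def lambda_pixel_count(content_labels, style_labels, i):
    # C_i and S_ij: pixel counts for class i in the content and style images.
    C_i = int(np.sum(content_labels == i))
    S_ij = int(np.sum(style_labels == i))
    if max(C_i, S_ij) == 0:
        return 0.0                      # class i absent from both images
    # Assumed form: equal pixel counts give lambda = 1 (most similar).
    return min(C_i, S_ij) / max(C_i, S_ij)

The selecting step would then take the style image with the highest value, e.g. w = int(np.argmax([lambda_pixel_count(content_labels, sl, i) for sl in style_label_list])), where style_label_list is a hypothetical list of style-image label maps.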
16 Zhu teaches computing
[Equation reproduced in the original as media_image1.png (69 × 134, greyscale): the claimed formula for computing λij]
wherein, in the above equation, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly… with the highest value of λiw for each i-value ([0059] reciting “∇log I(x, y) = [∂x log I(x, y), ∂y log I(x, y)]^T. Two log-gradient images ∂x log I(x, y) and ∂y log I(x, y) reflect the horizontal and vertical structures in the image...An efficient implementation of logarithm image log I(x, y) is through mapping with a pre-calculated table, e.g. [log(1), . . . , log(255)] for 8-bit images, whose pixel values range from 0, 1, . . . , 255.”).
17 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar to incorporate the teachings of Zhu, thereby providing a similar type of formula that relates to the claim limitations, such as selecting a highest value in a range, while measuring the similarity of images on a per-pixel basis as taught by Li in view of Tomar. Doing so would provide an efficient implementation of the logarithm image, as stated by Zhu ([0059]).
18 Li in view of Tomar and Zhu do not explicitly teach computing …, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly, and the selecting step comprises selecting the style image Isw with the highest value of λiw for each i-value.
19 Dundar teaches wherein the step of computing the parameter λij comprises computing …, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly, and the selecting step comprises selecting the style image Isw with the highest value of λiw for each i-value ([0050] reciting “The smoothing operation has two goals. First, pixels with similar content in a local neighborhood should be stylized similarly. Second, the output should not deviate significantly from the stylized photorealistic image generated by the style transfer neural network model 110 in order to maintain the global stylization effects. In an embodiment, all pixels may be represented as nodes in a graph and an affinity matrix W = {w_ij} ∈ R^(N×N) (N is the number of pixels) is defined to describe pixel similarities.”; [0052] reciting “…and S is the normalized Laplacian matrix computed from I_C, i.e., S = D^(−1/2) W D^(−1/2) ∈ R^(N×N). As the constructed graph is often sparsely connected (i.e., most elements in W are zero), the inverse operation in equation (6) can be computed efficiently.”).
20 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar and Zhu to incorporate the teachings of Dundar, thereby providing a method that calculates similarity values from the number of pixels, in a manner similar to the claimed limitation, while using the style images of Li in view of Tomar and Zhu. Doing so would help remove artifacts and hence improve photorealism, as stated by Dundar ([0052]).
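For reference, the smoothing step Dundar describes in [0050]-[0052] admits a short dense-matrix sketch in Python. The closed form R = (1 − α)(Id − αS)^(−1)·Y, with Id the identity matrix, is a standard smoothing solution consistent with the inverse operation the quote mentions; it is shown here only as an assumed reading of equation (6), and the dense implementation is for clarity, since, as Dundar notes, practical implementations exploit the sparsity of W.

import numpy as np

def smooth(W, Y, alpha=0.99):
    # S = D^(-1/2) W D^(-1/2): normalized Laplacian-style matrix built from
    # the pixel affinity matrix W (N x N); Y holds the stylized pixel values.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # Closed-form smoothing: R = (1 - alpha) (Id - alpha S)^(-1) Y.
    N = W.shape[0]
    return (1.0 - alpha) * np.linalg.solve(np.eye(N) - alpha * S, Y)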
21 Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1) as applied to claim 1 above, further in view of Jin et al. (US 20190057527 A1) and Dale et al. (US 20130129231 A1).
22 Regarding claim 3, Li in view of Tomar teach a process as in Claim 1 (see claim 1 rejection above), but do not explicitly teach wherein the parameter λij relates inversely to said similarity and the selecting step comprises selecting the style image Isw with the smallest value of λij for a given i-value.
23 Jin teaches wherein the parameter λij, relates inversely to said similarity and the selecting step comprises selecting the style image Isw with the smallest value of λij for a given i-value ([0060] reciting “As discussed above, the points in the 128-D style embedding have been determined for the superpixels in the region 600, describing style properties such as media type, color distribution, feeling, visual composition, and so forth…In this example, the point in the 128-D style embedding for the superpixel 610 would have the lowest weight, the point in the 128-D style embedding for the superpixel 606 would have a higher weight, and the point in the 128-D style embedding for the superpixel 608 would have the highest weight, based on the respective areas of overlap with the patch 604.”).
24 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar to incorporate the teachings of Jin, thereby providing a method that selects a style image with a smaller value, utilizing the style images taught by Li in view of Tomar. Doing so would allow style properties such as media type, color distribution, feeling, visual composition, and so forth to be determined, as stated by Jin ([0060]).
25 Li in view of Tomar and Jin do not explicitly teach wherein the parameter λij relates inversely to said similarity…
26 Dale teaches wherein the parameter λij, relates inversely to said similarity… ([0074] reciting “For example, display module 116 may linearly interpolate the similarity metric between the labeled face and the unlabeled face. The distance value may be inversely proportional to the similarity metric. For example, a higher probability similarity metric may result in a smaller distance value.”; [0094] reciting “As another example, the image labeling system may be applied to labeling content in video scenes. As yet another example, the image labeling system may be applied to labeling web page designs to indicate design styles for the web pages. The methods of the image labeling system described herein may be applied to any type of labeling system that is based on a similarity comparison.”).
27 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar and Jin to incorporate the teachings of Dale, thereby providing a method in which the parameter relates inversely to the similarity comparison, using the style and content images taught by Li in view of Tomar and Jin. Doing so would allow visual representations to be generated, as stated by Dale ([0094]).
28 Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1), Jin et al. (US 20190057527 A1), and Dale et al. (US 20130129231 A1) as applied to claims 1 and 3 above, further in view of Dundar et al. (US 20190244060 A1) and Zhu et al. (US 20060177100 A1).
29 Regarding claim 4, Li in view of Tomar, Jin, and Dale teach a process as in Claim 3 (see claims 1 and 3 rejections above), but do not explicitly teach wherein the step of computing the parameter λij comprises computing
[Equation reproduced in the original as media_image2.png (51 × 152, greyscale): the claimed formula for computing λij]
wherein, in the above equation, Ci and Sij are the number of pixels labelled as i on Ic and Isj correspondingly, and the selecting step comprises selecting the style image Isw with the lowest value of λiw for each i-value.
30 Zhu teaches computing
[Equation reproduced in the original as media_image2.png (51 × 152, greyscale): the claimed formula for computing λij]
wherein, in the above equation, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly… with the lowest value of λiw for each i-value ([0059] reciting “∇log I(x, y) = [∂x log I(x, y), ∂y log I(x, y)]^T. Two log-gradient images ∂x log I(x, y) and ∂y log I(x, y) reflect the horizontal and vertical structures in the image…An efficient implementation of logarithm image log I(x, y) is through mapping with a pre-calculated table, e.g. [log(1), . . . , log(255)] for 8-bit images, whose pixel values range from 0, 1, . . . , 255.”).
31 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar, Jin, and Dale to incorporate the teachings of Zhu, thereby providing a similar type of formula that relates to the claim limitations, such as selecting a lowest value in a range, while measuring the similarity of images on a per-pixel basis as taught by Li in view of Tomar, Jin, and Dale. Doing so would provide an efficient implementation of the logarithm image, as stated by Zhu ([0059]).
32 Li in view of Tomar, Jin, Dale, and Zhu do not explicitly teach computing …, Ci and Sij are the number of pixels labelled as i on Ic and Isj, correspondingly, and the selecting step comprises selecting the style image Isw with the lowest value of λiw for each i-value.
33 Dundar teaches wherein the step of computing the parameter λij comprises computing … wherein, in the above equation, Ci and Sij are the number of pixels labelled as i on Ic and Isj correspondingly, and the selecting step comprises selecting the style image Isw with the lowest value of λiw for each i-value ([0050] reciting “The smoothing operation has two goals. First, pixels with similar content in a local neighborhood should be stylized similarly. Second, the output should not deviate significantly from the stylized photorealistic image generated by the style transfer neural network model 110 in order to maintain the global stylization effects. In an embodiment, all pixels may be represented as nodes in a graph and an affinity matrix W = {w_ij} ∈ R^(N×N) (N is the number of pixels) is defined to describe pixel similarities.”; [0052] reciting “…and S is the normalized Laplacian matrix computed from I_C, i.e., S = D^(−1/2) W D^(−1/2) ∈ R^(N×N). As the constructed graph is often sparsely connected (i.e., most elements in W are zero), the inverse operation in equation (6) can be computed efficiently.”).
34 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar, Jin, Dale, and Zhu to incorporate the teachings of Dundar, thereby providing a method that calculates similarity values from the number of pixels, including the inverse operation, in a manner similar to the claimed limitation, while using the style images of Li in view of Tomar, Jin, Dale, and Zhu. Doing so would help remove artifacts and hence improve photorealism, as stated by Dundar ([0052]).
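Mirroring the claim 2 sketch above, an inversely related parameter for claims 3-4 (a smaller value means more similar) can be illustrated as follows. Again, the claimed formula is reproduced only as an image (media_image2.png), so the normalized pixel-count difference below is an assumed stand-in, not the claimed equation.

import numpy as np

def lambda_inverse(content_labels, style_labels, i):
    # Smaller lambda = more similar; identical counts give lambda = 0.
    C_i = int(np.sum(content_labels == i))
    S_ij = int(np.sum(style_labels == i))
    if C_i + S_ij == 0:
        return float("inf")             # class i absent from both images
    return abs(C_i - S_ij) / (C_i + S_ij)

The selecting step then takes the lowest value instead of the highest, e.g. w = int(np.argmin([lambda_inverse(content_labels, sl, i) for sl in style_label_list])).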
35 Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1), Jin et al. (US 20190057527 A1), and Dale et al. (US 20130129231 A1) as applied to claims 1 and 3 above, further in view of Hu et al. (US 20200193491 A1).
36 Regarding claim 5, Li in view of Tomar, Jin, and Dale teach a process as in any one of Claims 1 or 3 (see claims 1 and 3 rejections above), but do not explicitly teach wherein the step of computing the parameter λij comprises computing a similarity parameter λij comprising constituent parameters other than the number of pixels labelled as i on Ic and Isj.
37 Hu teaches wherein the step of computing the parameter λij comprises computing a similarity parameter λij comprising constituent parameters other than the number of pixels labelled as i on Ic and Isj. ([0003] reciting “According to one aspect of the present disclosure, there is provided a computer-implemented method for determining product price, comprising: acquiring structural parameters and electrical parameters of a product; constructing appearance picture of the product with the structural parameters of the product, and comparing similarities between the appearance picture of the product and appearance pictures of historical products to obtain an appearance similarity ranking; comparing similarities between the electrical parameters of the product and electrical parameters of the historical products to obtain an electrical-parameter similarity ranking; obtaining a comprehensive ranking with respect to the structural parameters and the electrical parameters based on cost weights of structural components and electrical elements, the appearance similarity ranking and the electrical-parameter similarity ranking; and determining a bill of materials for the product based on the comprehensive ranking, and calculating a price for the product based on the bill of materials for the product.”)
38 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar, Jin, and Dale to incorporate the teachings of Hu, thereby providing a method that computes a similarity parameter from constituent data other than the number of pixels, while utilizing the specific parameters taught by Li in view of Tomar, Jin, and Dale. Doing so would allow an appearance similarity ranking to be obtained, as stated by Hu ([0004]).
39 Claim(s) 6 and 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European conference on computer vision (ECCV) (pp. 453-468) (hereinafter Li) in view of Tomar et al. (US 20210382936 A1), Jin et al. (US 20190057527 A1), Dale et al. (US 20130129231 A1), and Hu et al. (US 20200193491 A1) as applied to claims 1, 3, and 5 above, further in view of Liu et al. (US 20200327709 A1).
40 Regarding claim 6, Li in view of Tomar, Jin, Dale, and Hu teach a process as in Claim 5 (see claims 1, 3, and 5 rejections above), but do not explicitly teach wherein the constituent parameters comprise the number of entities labelled as i in the content image Ic and in the style images Isj.
41 Liu teaches wherein the constituent parameters comprise the number of entities labelled as i in the content image Ic and in the style images Isj. ([0087] reciting “…a modification or manipulation of the CNN in between each of the training sets allow certain parameters to be retained from each training, and thus allowing a CNN to retain some knowledge of an object identification training whilst being retrained with image emotion dataset to identify images with specific emotions… a loose and indirect, or even disassociated relationship between object identification and style identification by the CNN and thus allowing a greater accuracy for the CNN to identify an emotion from the rendering styles of an image, rather than from the contents of the image, whilst at the same time, offering an advantage that the CNN can nonetheless perform content or object identification that may be useful in certain situations, such as those of functions achieved by the system for rendering an image as described herein.”)
42 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar, Jin, Dale, and Hu to incorporate the teachings of Liu, thereby providing a method in which the parameters comprise labels (or identifications) of the specific entities, utilizing the style and content images taught by Li in view of Tomar, Jin, Dale, and Hu. Doing so would allow greater accuracy in identifying the emotion from the rendering styles of the images, as stated by Liu ([0087]).
43 Regarding claim 7, Li in view of Tomar, Jin, Dale, and Hu teach a process as in Claim 5 (see claims 1, 3, and 5 rejections above), but do not explicitly teach wherein the constituent parameters comprise the size of the entities of a particular class i in the content image Ic and in the style images Isj.
44 Liu teaches wherein the constituent parameters comprise the size of the entities of a particular class i in the content image Ic and in the style images Isj. ([0087] reciting “…a modification or manipulation of the CNN in between each of the training sets allow certain parameters to be retained from each training, and thus allowing a CNN to retain some knowledge of an object identification training whilst being retrained with image emotion dataset to identify images with specific emotions…;” [0111] reciting “Thus in one example embodiment, these steps are able to extract content representation from any given input source image. As this image is filtered in each layer of the CNN, a filter 526, which can, in one example, each be implemented to have the size of M_l × M_l, with the lth layer having N_l distinct filters.”; [0115] reciting “Accordingly, a pre-trained CNN, whether it be a separate pre-trained CNN or the same pre-trained CNN 520 as above, once trained for detecting and classifying rendering styles to emotion scores, can similarly be further processed to extract rendering styles from an inputted style image 504I that would include a desired rendering style.”)
45 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Li in view of Tomar, Jin, Dale, and Hu to incorporate the teachings of Liu, thereby providing a method in which the parameters comprise the size of the entities in the style and content images taught by Li in view of Tomar, Jin, Dale, and Hu. Doing so would allow a desired rendering style to be achieved, as stated by Liu ([0115]).
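To illustrate the constituent parameters of claims 6 and 7 (the number and size of entities of class i), one plausible reading computes them as connected components of the class-i region of a semantic label map. This Python sketch assumes scipy is available and that an "entity" means a connected component; the application may define entities differently.

import numpy as np
from scipy import ndimage

def entity_stats(labels, i):
    # Connected components of the class-i mask: their count is the number
    # of entities (claim 6); their pixel areas are the entity sizes (claim 7).
    mask = labels == i
    components, n_entities = ndimage.label(mask)
    sizes = ndimage.sum(mask, components, index=range(1, n_entities + 1))
    return n_entities, np.asarray(sizes)

A parameter λij built from these values would compare entity counts or sizes between Ic and Isj instead of (or in addition to) total pixel counts.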
Conclusion
46 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/ Examiner, Art Unit 2614
/ABDERRAHIM MEROUAN/ Primary Examiner, Art Unit 2614