DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/23/2024 is being considered by the examiner.
Specification
35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, requires the specification to be written in “full, clear, concise, and exact terms.” The specification is replete with terms which are not clear, concise, and exact, and it should be revised carefully in order to comply with 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112. Examples of unclear, inexact, or verbose usage in the specification include the following: terms that are labeled in the Drawings and in a few instances in the Detailed Description are not labeled throughout the remaining paragraphs of the Detailed Description. For example, paragraph 0200 discloses and labels the “electronic device” with reference number 600, yet paragraph 0205 and subsequent paragraphs mention the “electronic device” multiple times without the proper label, as do many other paragraphs. The same applies to many other terms throughout the detailed disclosure that lack a proper reference label. Further revision is required, and additional typographical errors should be corrected accordingly.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the language of the claims, when taken as a whole, raises questions as to whether the claims fall within any of the statutory categories of invention. Claim 1 recites a “wear trigger operation…,” which the Examiner deems may be interpreted as solely performed via software. Although the preamble of claim 1 recites a “method for effect processing,” this element occurs solely in the preamble and, in combination with the definition in Applicant’s specification at paragraph 0164, allows for the interpretation that the “wear trigger operation” can be performed solely in software. Such claimed elements are therefore software per se, which fails to fall within a statutory category of invention and necessitates the rejection of the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 4, 7, 9, 10, 12, 15, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over LU (No. WO-2023093679-A1 “Lu”) in view of GUO (No. CN-110580733-A “Guo”) and in further view of KOENIG (No. US-10504264-B1 “Koenig”).
Regarding claim 1, Lu teaches “A method for effect processing (an image processing method; Pg.1 Para 4), comprising:
in response to a wear trigger operation on an effect wearable item, performing model blocking processing on an effect three-dimensional model of the effect wearable item to obtain an effect blocking three-dimensional model;” (the item to be worn is a rendering fitting special effect, and the item to be worn can be determined according to the trigger operation of the user; Pg.3 Para 3);
(Determine the target object according to the tag information in the image to be processed; or, Taking an object to be processed whose display ratio is larger than a preset display ratio in the image to be processed as a target object; or, All objects to be processed in the image to be processed are used as the target objects; Pg.13 Para 4-6);
(Determine the wear effect to be processed corresponding to the item to be worn, and obtain the target object to be displayed wearing the item to be worn according to the material information of the item to be worn, the wear effect to be processed, and the three-dimensional body model; Pg.4 Para 9);
Lu discloses determining the target object by the tag information, which teaches that the wearable item is processed as a three-dimensional model, thereby performing model blocking on the 3D model of the wearable item so that the 3D model can then be processed with effect blocking to produce an effect of the wearable item. Lu further teaches that once the target object (wearable item) is chosen, effects are applied to it, which teaches the wear effect processing corresponding to the wearable object.
However, Lu fails to teach “acquiring a target object original image currently displayed comprising a target object, and determining an object contour image of the target object original image, wherein a wear body of the effect wearable item is located on the target object; and
based on the effect blocking three-dimensional model in combination with the object contour image, determining and displaying an effect combined image.”
Guo teaches “acquiring a target object original image currently displayed comprising a target object, and determining an object contour image of the target object original image, wherein a wear body of the effect wearable item is located on the target object; and”
(The first two-dimensional feature point may refer to a key point in a human face in the first image that reflects its posture or expression, for example, a point on an eyebrow, an eye corner, a nose tip, a lip line, a contour line of a face, and the like; Pg. 5 Para 8);
Koenig teaches “based on the effect blocking three-dimensional model in combination with the object contour image, determining and displaying an effect combined image.”
(the reference image to determine positions of the object (e.g., the individual) from the further image for the purpose of having the object correctly placed into a combined image; Summary, Col 1 Para 3 Line 64);
Koenig teaches a combined image in which an object is positioned based on the individual in another image, thus displaying a combined image formed from two images. The effect blocking processing of 3D objects that Lu teaches, turning wearable items into effects, in combination with Koenig’s placement of an effect object with another image, produces a combined image.
The motivation for the above is to have a more accurate and efficient generation and display of the target object on the 3D model.
Lu, Guo and Koenig are analogous art as they are all related to image processing and effects utilizing 3D models.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu by acquiring a target object original image currently displayed comprising a target object, and determining an object contour image of the target object original image, wherein a wear body of the effect wearable item is located on the target object, as taught by Guo, and by determining and displaying an effect combined image based on the effect blocking three-dimensional model in combination with the object contour image, as taught by Koenig, combining these teachings with Lu’s operation of model blocking processing.
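By way of illustration only, and not as a characterization of any cited reference, the following sketch shows one possible realization of the claim 1 flow discussed above (hypothetical Python/NumPy; every function name, data shape, and placeholder operation is an assumption introduced for illustration):

```python
# Illustrative sketch only: a hypothetical flow for the steps of claim 1.
import numpy as np

def model_blocking(effect_model_vertices):
    """Hypothetical 'model blocking processing': attach a 2D blocking
    plane (here just a z-level record) to the effect 3D model."""
    z_min = float(effect_model_vertices[:, 2].min())
    return {"vertices": effect_model_vertices, "blocking_plane": {"z": z_min}}

def object_contour(original_image):
    """Hypothetical contour extraction: a binary mask of non-black pixels
    stands in for a real segmentation of the target object."""
    return (original_image.sum(axis=-1) > 0).astype(np.uint8)

def combine(blocking_model, contour_mask, original_image):
    """Hypothetical combination: keep original pixels where the contour
    mask is set; elsewhere show a (placeholder) effect rendering."""
    effect_render = np.zeros_like(original_image)  # stand-in for a render
    mask = contour_mask[..., None].astype(bool)
    return np.where(mask, original_image, effect_render)

# Wear trigger -> model blocking -> contour -> combined effect image.
vertices = np.random.rand(100, 3)
image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
combined = combine(model_blocking(vertices), object_contour(image), image)
```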
Regarding claim 9, Lu teaches “An electronic device, comprising:” (an electronic device; Pg.2 Para 2);
“one or more processors; and” (one or more processors; Pg.2 Para 3);
“at least one memory, configured to store one or more programs,” (a program stored in a read-only memory; Pg. 10 Para 6);
(storage means configured to store one or more programs; Pg.2 Para 4);
“wherein when the one or more programs are executed by the one or more processors, causing the one or more processors to implement a method for effect processing, and the method for effect processing comprises:” (one or more programs are executed by the one or more processors, the one or more processors implement the image processing method; Pg.2 Para 5);
Claim 9 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 1. Therefore, the limitations of claim 9 are also rejected with the same rationale as claim 1.
Regarding claim 17, Lu teaches “A non-transient computer-readable storage medium, comprising computer-executable instructions, wherein the computer-executable instructions, when executed by a processor of a computer, implement a method for effect processing, and the method for effect processing comprises:” (a non-transitory computer readable medium; Pg.10 Para 8);
(a storage medium containing computer-executable instructions; Pg.2 Para 6);
(a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the image processing method; Pg.11 Para 4);
Claim 17 is directed to a non-transient computer-readable storage medium, and its limitations are similar in scope to the functions performed by the effect processing method of claim 1. Therefore, the limitations of claim 17 are also rejected with the same rationale as claim 1.
Regarding claim 2, while Lu and Koenig fail to teach all limitations for claim 2, Guo teaches “The method according to claim 1, wherein the performing model blocking processing on an effect three-dimensional model of the effect wearable item to obtain an effect blocking three-dimensional model comprises:
based on model vertex information of the effect three-dimensional model, constructing a two-dimensional blocking plane, and determining a plane placing position that meets a condition from the effect three-dimensional model; and” (two-dimensional feature point is obtained by projecting the three-dimensional feature point in the preset three-dimensional model into the two-dimensional space; Pg.5 Para 7);
(the basic idea of the three-dimensional deformation model is: treat the face space as a linear space; Pg.5 Para 4);
(Determining a first two-dimensional feature point corresponding to a human face in the first image; Pg.5 Para 6);
“combining the two-dimensional blocking plane with the effect three-dimensional model at the plane placing position, to obtain the effect blocking three-dimensional model obtained through combination.”
(the two-dimensional projection image corresponding to the target three-dimensional model is determined according to the posture parameters of the human face in the second image; Pg.7 Para 11);
The motivation for the above is to have an accurate special effect on the 3D model when the two images are combined.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu and Koenig by constructing a two-dimensional blocking plane based on model vertex information of the effect three-dimensional model, determining a plane placing position that meets a condition from the effect three-dimensional model, and combining the two-dimensional blocking plane with the effect three-dimensional model at the plane placing position to obtain the effect blocking three-dimensional model obtained through combination, as taught by Guo.
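As an illustration of the claim 2 limitations addressed above, a minimal hypothetical sketch follows (Python/NumPy; the sizing and placement heuristics are illustrative assumptions, not taken from Guo):

```python
# Illustrative sketch only: building a 2D blocking plane from vertex
# information and merging it into the model at a placing position.
import numpy as np

def build_blocking_plane(vertices, plane_height=0.1):
    """Size a rectangular plane from the lowest vertices and place it at
    their centroid; both heuristics are illustrative assumptions."""
    z_min = vertices[:, 2].min()
    bottom = vertices[np.isclose(vertices[:, 2], z_min, atol=1e-3)]
    width = np.ptp(bottom[:, 0])            # X extent as the plane width
    position = bottom.mean(axis=0)          # the plane placing position
    dx, dy = width / 2, plane_height / 2
    corners = position + np.array(
        [[-dx, -dy, 0], [dx, -dy, 0], [dx, dy, 0], [-dx, dy, 0]])
    return corners, position

def combine_plane_with_model(vertices, plane_corners):
    """Append the plane's corners to the model vertices as a simplified
    stand-in for a real mesh merge, yielding the 'effect blocking
    three-dimensional model'."""
    return np.vstack([vertices, plane_corners])

model = np.random.rand(200, 3)
corners, pos = build_blocking_plane(model)
blocked_model = combine_plane_with_model(model, corners)
```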
Claim 10 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 2. Therefore, the limitations of claim 10 are also rejected with the same rationale as claim 2.
Claim 18 is directed to a non-transient computer-readable storage medium, and its limitations are similar in scope to the functions performed by the effect processing method of claim 2. Therefore, the limitations of claim 18 are also rejected with the same rationale as claim 2.
Regarding claim 4, while Koenig fails to teach all limitations for claim 4, Lu teaches the limitation “performing image fusion on the second effect processing image and the target object original image, to obtain and display the effect combined image after fusion.” (The first implementation manner may be: if the pixels corresponding to the item to be worn do not cover the pixels of the original item to be worn, then erasing the exposed pixel points of the original item to be worn, so as to obtain a target object to be displayed that satisfies the set condition; Pg.8 Para 2);
(adjust the body model of the target object to be displayed based on the body model multiple body parts to update the target object based on the adjusted body parts; Pg.8 Para 4);
Lu discloses erasing the exposed pixel points of the item to obtain a target object to be displayed, which shows how the images can be fused together. By updating the target object based on the body part, Lu discloses how the two images can be fused together: one is the item to be worn and the other is the body part on which it is worn.
Lu and Koenig fail to teach all other limitations: “The method according to claim 1, wherein the based on the effect blocking three-dimensional model in combination with the object contour image, determining and displaying an effect combined image comprises:
performing two-dimensional conversion rendering on the effect blocking three-dimensional model to obtain an effect processing intermediate image, wherein the effect processing intermediate image comprises a plane rendering region of a two-dimensional blocking plane used for effect processing;”
“performing image replacement on the plane rendering region of the effect processing intermediate image based on the object contour image, to obtain a first effect processing image after the image replacement;”
“performing color filling on the first effect processing image according to a set color filling strategy, to obtain a second effect processing image after the color filling; and”
However, Guo teaches “The method according to claim 1, wherein the based on the effect blocking three-dimensional model in combination with the object contour image, determining and displaying an effect combined image comprises:
performing two-dimensional conversion rendering on the effect blocking three-dimensional model to obtain an effect processing intermediate image, wherein the effect processing intermediate image comprises a plane rendering region of a two-dimensional blocking plane used for effect processing;” (the complete texture image may be rendered into the second image. Rendering in computer graphics refers to the process of generating images from three-dimensional models; Pg.7 Para 3);
(two-dimensional projection image corresponding to the target three-dimensional model according to a posture parameter of a face in the second image; Pg.7 Para 6);
The texture image is rendered to the 3D target based on the posture parameter; the rendered area is the region of the 2D image that is used for effect processing.
“performing image replacement on the plane rendering region of the effect processing intermediate image based on the object contour image, to obtain a first effect processing image after the image replacement;” (a complete texture image is rendered into a second image, so that a surface of a face (a face after replacement) in the second image presents a complete texture image; Pg.8 Para 7);
(so that the replaced human face has the preset expression; Pg.8 Para 7);
The texture image relates to the contour image that is used to obtain the rendered image after the replacement.
“performing color filling on the first effect processing image according to a set color filling strategy, to obtain a second effect processing image after the color filling; and” (Texture images can be characterized by visual features such as color and grayscale; Pg.3 Para 7);
(the complete texture image is rendered to the face region of the second image to obtain a third image, a conversion model corresponding to a facial style template is also used; Pg.10 Para 2);
The texture images can be color filled, and the completed texture image can then be rendered to obtain another processed image after the color filling.
The motivation for the above is to have more accurate operations for image rendering, color filling, and image replacement.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu by performing two-dimensional conversion rendering on the effect blocking three-dimensional model to obtain an effect processing intermediate image, wherein the effect processing intermediate image comprises a plane rendering region of a two-dimensional blocking plane used for effect processing; performing image replacement on the plane rendering region of the effect processing intermediate image based on the object contour image, to obtain a first effect processing image after the image replacement; and performing color filling on the first effect processing image according to a set color filling strategy, to obtain a second effect processing image after the color filling, as taught by Guo.
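For illustration of the four-step flow recited in claim 4 and mapped above, a minimal hypothetical sketch follows (Python/NumPy; the placeholder render, region mask, and fill-strategy choices are assumptions, not taken from Lu or Guo):

```python
# Illustrative sketch only: render -> replace -> color fill -> fuse,
# matching the four steps recited in claim 4.
import numpy as np

def render_to_2d(h=64, w=64):
    """Hypothetical 2D conversion rendering: an intermediate image plus a
    mask marking the plane rendering region (assumed lower half here)."""
    intermediate = np.zeros((h, w, 3), np.uint8)
    plane_region = np.zeros((h, w), bool)
    plane_region[h // 2:, :] = True
    return intermediate, plane_region

def replace_plane_region(intermediate, plane_region, contour_image):
    """Image replacement: overwrite the plane rendering region with the
    corresponding pixels of the object contour image."""
    first = intermediate.copy()
    first[plane_region] = contour_image[plane_region]
    return first

def color_fill(first, fill_color=(0, 255, 0)):
    """Color filling per a set strategy (assumed: fill black pixels)."""
    second = first.copy()
    second[(second == 0).all(axis=-1)] = fill_color
    return second

def fuse(second, original, alpha=0.5):
    """Image fusion of the second effect image with the original image."""
    return (alpha * second + (1 - alpha) * original).astype(np.uint8)

inter, region = render_to_2d()
contour = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
original = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
result = fuse(color_fill(replace_plane_region(inter, region, contour)), original)
```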
Claim 12 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 4. Therefore, the limitations of claim 12 are also rejected with the same rationale as claim 4.
Claim 20 is directed to a non-transient computer-readable storage medium, and its limitations are similar in scope to the functions performed by the effect processing method of claim 4. Therefore, the limitations of claim 20 are also rejected with the same rationale as claim 4.
Regarding claim 7, while Lu and Guo fail to teach all limitations for claim 7, Koenig teaches “The method according to claim 1, further comprising:
when a reshaping beautification operation on a selected body on the target object is detected, acquiring selected body reshaping information corresponding to the reshaping beautification operation; and” (The user can also transform the reference image or further image, with the transformation including zooming, panning, scrolling, cropping, performing perspective corrections, changing angle of view, applying color filters, to name a few; Col 1 Line 67 - Col 2 Line 3);
(the images can be seamlessly merged into a combined image by placing the object (e.g., the individual) into the first reference image according to the determined positions. Thereafter, digital filters and beautification techniques can be applied to the combined image; Col 2 Line 9-13);
“updating the object contour image based on the selected body reshaping information.” (The user can also transform the reference image or further image, with the transformation; Col 1 Line 67);
(Thereafter, digital filters and beautification techniques can be applied to the combined image; Col 2 Line 13);
The transformation refers to changes the user can apply to the image. The user can transform the object contour image based on their selected reshaping information and desired filter.
The motivation for the above is to have a more precise and accurate beautification operation on the image for user friendly modifications.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu and Guo by acquiring selected body reshaping information corresponding to the reshaping beautification operation, and updating the object contour image based on the selected body reshaping information, as taught by Koenig.
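As a purely illustrative sketch of the claim 7 flow discussed above (hypothetical Python/NumPy; the reshaping_info structure and the mask update are assumptions, not taken from Koenig):

```python
# Illustrative sketch only: on a detected reshaping beautification
# operation, acquire the selected body's reshaping information and
# update the object contour image accordingly.
import numpy as np

def update_contour(contour_mask, reshaping_info):
    """reshaping_info is a hypothetical dict such as
    {"body": "waist", "scale": 0.9}; scales below 1 shrink (erode) the
    mask by one pixel and scales of 1 or more grow (dilate) it."""
    grow = reshaping_info.get("scale", 1.0) >= 1.0
    rolled = np.roll(contour_mask, 1 if grow else -1, axis=1)
    return contour_mask | rolled if grow else contour_mask & rolled

mask = np.zeros((8, 8), np.uint8)
mask[2:6, 2:6] = 1
updated = update_contour(mask, {"body": "waist", "scale": 0.9})
```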
Claim 15 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 7. Therefore, the limitations of claim 15 are also rejected with the same rationale as claim 7.
Claim(s) 3, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over LU in view of GUO and KOENIG and in further view of YANG (No. CN-114445601-A “Yang”).
Regarding claim 3, while Lu, Guo and Koenig all fail to teach the limitations for claim 3, Yang teaches “The method according to claim 2, wherein the based on model vertex information of the effect three-dimensional model, constructing a two-dimensional blocking plane, and determining a plane placing position that meets a condition from the effect three-dimensional model comprises:
based on the model vertex information of the effect three-dimensional model, determining a maximum plane length value and a central axis of a bottom plane formed by bottom vertexes of the effect three-dimensional model;” (three-dimensional space, the coordinates of vertices include coordinate values in three directions: X, Y, and Z. Among them, the Z value refers to the distance from the vertex to the XY plane; Pg.7 Para 11);
(the vertex depth refers to the distance from the vertices to the plane formed by the X axis and the Y axis, that is, the z coordinate value of the vertex; Pg.9 Para 11);
The 3D space includes vertices and the X, Y, and Z directions, which relate to the plane length value and the bottom vertexes of the model in the 3D space.
“determining a blocking plane width value based on the maximum plane length value, and constructing the two-dimensional blocking plane in combination with a pre-obtained blocking plane height value; and determining position information of the central axis as the plane placing position.” (the coordinates of vertices include coordinate values in three directions: X, Y, and Z.);
(the mirror image refers to symmetry along a central axis; Pg.14 Para 4);
(determining the pose adjustment information according to the position mapping relationship; Pg.14 Para 6);
(the adjusted body part model is mapped to a two-dimensional model in a two-dimensional plane; Pg.8 Para 14);
The X and Y coordinates relate to the width value and the maximum plane length. The plane placing position relates to what is disclosed in the position mapping of a 2D plane.
The motivation for the above is to have accurate vertex information of the 3D model.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu, Guo and Koenig by determining a maximum plane length value and a central axis of a bottom plane formed by bottom vertexes of the effect three-dimensional model; determining a blocking plane width value based on the maximum plane length value, and constructing the two-dimensional blocking plane in combination with a pre-obtained blocking plane height value; and determining position information of the central axis as the plane placing position, as taught by Yang.
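To illustrate the geometric quantities recited in claim 3 and mapped above, a minimal hypothetical sketch follows (Python/NumPy; the bottom-vertex tolerance and the centroid-based axis are illustrative assumptions, not taken from Yang):

```python
# Illustrative sketch only: the quantities recited in claim 3.
import numpy as np

def bottom_plane_metrics(vertices, atol=1e-3):
    """From the model vertex information, collect the bottom vertices,
    take the largest extent as the maximum plane length value, and take
    the vertical line through their centroid as the central axis."""
    z_min = vertices[:, 2].min()
    bottom = vertices[np.isclose(vertices[:, 2], z_min, atol=atol)]
    max_length = max(np.ptp(bottom[:, 0]), np.ptp(bottom[:, 1]))
    central_axis = bottom[:, :2].mean(axis=0)   # (x, y) of the axis
    return max_length, central_axis

def blocking_plane_size(max_length, height_value):
    """Derive the blocking plane width from the maximum plane length
    (a 1:1 ratio is assumed); the height value is 'pre-obtained'."""
    return max_length, height_value

length, axis = bottom_plane_metrics(np.random.rand(200, 3))
width, height = blocking_plane_size(length, 0.2)
# The position information of `axis` serves as the plane placing position.
```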
Claim 11 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 3. Therefore, the limitations of claim 11 are also rejected with the same rationale as claim 3.
Claim 19 is directed to a non-transient computer-readable storage medium, and its limitations are similar in scope to the functions performed by the effect processing method of claim 3. Therefore, the limitations of claim 19 are also rejected with the same rationale as claim 3.
Claim(s) 5, 6, 8, 13, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over LU in view of GUO and KOENIG and in further view of KUMADA (No. JP-2004265292-A “Kumada”).
Regarding claim 5, Lu, Guo and Koenig all fail to teach all the limitations of claim 5; however, Kumada teaches “The method according to claim 4, wherein the performing image replacement on the plane rendering region of the effect processing intermediate image based on the object contour image, to obtain a first effect processing image after the image replacement comprises:
determining pixel coordinates of corresponding pixels of the plane rendering region on the effect processing intermediate image;” (The integration processing unit 4 writes the coordinate point (x, y) of the pixel; Pg.6);
“searching the object contour image for contour pixel information of pixels corresponding to the pixel coordinates;” (the pixels in the effective area of the background difference image BG (x, y) are searched. As described above, the values of pixels around the search point are repeatedly checked against the contour extraction image O (x, y); Pg.6);
(The integration processing unit 4 writes the coordinate point (x, y) of the pixel as a search point in the search point memory; Pg.6);
(background difference image BG (x, y) and the contour extraction image O (x, y) that are located around the search point (x, y), that is, around the search point (x, y); Pg.7);
“replacing plane pixel information of pixels in the plane rendering region with the contour pixel information, to construct a contour filling region; and” (This moving body writing process is repeated until all pixels are completed in step 803, without distinguishing between the valid area S1 and the invalid area S2. By repeating this “moving body writing process”, the contour extraction image O (x, y) connected to the image of the highly reliable person Ma in the background difference image BG (x, y); Pg.7);
(The moving body contour image M (x, y) obtained by the integration processing unit 4 is fed back to the background information update processing unit 5; Pg.7);
“determining an image comprising the contour filling region as the first effect processing image.” (Determination is made (step 402). ... The moving object presence / absence determination unit 12 displays the determination result of the presence or absence of the moving object for each pixel; Pg.7);
The motivation for the above is to have accurate pixel information for a better rendering of the effect processing of the image.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu, Guo and Koenig by determining pixel coordinates of corresponding pixels of the plane rendering region on the effect processing intermediate image; searching the object contour image for contour pixel information of pixels corresponding to the pixel coordinates; replacing plane pixel information of pixels in the plane rendering region with the contour pixel information, to construct a contour filling region; and determining an image comprising the contour filling region as the first effect processing image, as taught by Kumada.
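As an illustration of the coordinate-wise replacement recited in claim 5 (hypothetical Python/NumPy; the region mask and image shapes are assumptions, not taken from Kumada):

```python
# Illustrative sketch only: pixel-coordinate lookup and replacement as
# recited in claim 5.
import numpy as np

def replace_with_contour(intermediate, plane_region, contour_image):
    """Determine the pixel coordinates of the plane rendering region,
    search the contour image at those coordinates, and copy the contour
    pixel information over, forming the contour filling region."""
    first = intermediate.copy()
    ys, xs = np.nonzero(plane_region)       # coordinates of the region
    first[ys, xs] = contour_image[ys, xs]   # contour pixel information
    return first                            # first effect processing image

inter = np.zeros((64, 64, 3), np.uint8)
region = np.zeros((64, 64), bool)
region[32:, :] = True
contour = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
first_image = replace_with_contour(inter, region, contour)
```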
Claim 13 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 5. Therefore, the limitations of claim 13 are also rejected with the same rationale as claim 5.
Regarding claim 6, Lu, Guo and Koenig all fail to teach all the limitations of claim 6; however, Kumada teaches “The method according to claim 4, wherein the performing color filling on the first effect processing image according to a set color filling strategy, to obtain a second effect processing image after the color filling comprises:
determining a contour background region and an object contour region from a contour filling region of the first effect processing image;” (Therefore, if the contour of the changed object in the contour extraction image O (x, y) connected to the contour of this image is obtained based on the image of the person Ma in the background difference image BG (x, y), the entire contour of the person Ma is obtained. It is possible to obtain a moving body outline image M (x, y) (FIG. 6 (h)) with high reliability. FIG. 8 shows a flow of processing for generating the moving body contour image M (x, y) from the background difference image BG (x, y) and the contour extraction image O (x, y) in the integration processing unit 4; Pg.6);
“searching the target object original image for object pixel coordinates corresponding to the object contour region, and filling pixel values of the object pixel coordinates into the object contour region;” (the pixels in the effective area of the background difference image BG (x, y) are searched. As described above, the values of pixels around the search point are repeatedly checked against the contour extraction image O (x, y); Pg.6);
(determines whether or not the values of the corresponding pixels of the background difference image BG (x, y) and the contour extraction image O (x, y) are both “1”; Pg.6)
The target object is searched against the contour image as disclosed by Kumada. When the contour image value is “1”, the pixel values are filled for the coordinates of the contour image.
“searching a predetermined effect original rendering image for background pixel coordinates corresponding to the contour background region, and filling pixel values of the background pixel coordinates into the contour background region; and” (The integration processing unit 4 writes the coordinate point (x, y) of the pixel as a search point in the search point memory (step 901 shown in Fig. 9), and the movement corresponding to the search point (x, y). The value of the pixel in the body contour image M (x, y) is set to “1” (see step 902: FIG. 10C), and the search point (x, y) is set so as not to be set as a search point again; Pg.6);
“denoting an image obtained after pixel value filling is performed on the object contour region and the contour background region as the second effect processing image.” (the contour extraction image O (x, y) are both “1”; Pg.6)
(Therefore, if the contour of the changed object in the contour extraction image O (x, y) connected to the contour of this image is obtained based on the image of the person Ma in the background difference image BG (x, y), the entire contour of the person Ma is obtained; Pg.6);
The entire contour being obtained relates to the image obtained after pixel filling, since this is the end of the process for the image.
The motivation for the above is to have efficient pixel value filling in the processing image for better accuracy of the overall image.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu, Guo and Koenig by determining a contour background region and an object contour region from a contour filling region of the first effect processing image; searching the target object original image for object pixel coordinates corresponding to the object contour region, and filling pixel values of the object pixel coordinates into the object contour region; searching a predetermined effect original rendering image for background pixel coordinates corresponding to the contour background region, and filling pixel values of the background pixel coordinates into the contour background region; and denoting an image obtained after pixel value filling is performed on the object contour region and the contour background region as the second effect processing image, as taught by Kumada.
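As an illustration of the two-source color filling recited in claim 6 (hypothetical Python/NumPy; the region split and fill sources follow the claim language, while all names and shapes are illustrative assumptions):

```python
# Illustrative sketch only: the two-source color filling of claim 6.
import numpy as np

def color_fill_regions(first, fill_region, contour_mask,
                       original_image, effect_render):
    """Split the contour filling region into the object contour region
    (inside the mask) and the contour background region (outside it);
    fill the former from the target object original image and the latter
    from the effect original rendering image."""
    second = first.copy()
    inside = contour_mask.astype(bool)
    object_region = fill_region & inside
    background_region = fill_region & ~inside
    second[object_region] = original_image[object_region]
    second[background_region] = effect_render[background_region]
    return second                          # second effect processing image

h = w = 64
first = np.zeros((h, w, 3), np.uint8)
fill = np.ones((h, w), bool)
mask = np.zeros((h, w), np.uint8)
mask[16:48, 16:48] = 1
original = (np.random.rand(h, w, 3) * 255).astype(np.uint8)
render = np.full((h, w, 3), 30, np.uint8)
second = color_fill_regions(first, fill, mask, original, render)
```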
Claim 14 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 6. Therefore, the limitations of claim 14 are also rejected with the same rationale as claim 6.
Regarding claim 8, Lu teaches “The method according to claim 7, wherein the updating the object contour image based on the selected body reshaping information comprises:
determining a quantity of times of update cycle of the object contour image based on the selected body reshaping information;” (Adjusting multiple body parts in the target object to be displayed based on the body model, so as to update the target object based on the adjusted body parts; Pg.14 Para 9);
“performing pixel value update on all pixels in the object contour image according to a set pixel value update strategy;” (if the pixel corresponding to the item to be worn covers the pixel of the original item to be worn, the preset condition is satisfied, and the target in the image to be processed is updated based on the target object to be displayed object; Pg.7 Para 9);
However, Lu, Guo and Koenig all fail to teach “forming an updated object contour image based on updated pixel values of all the pixels, and adding 1 to a current update cycle count; and
returning to re-perform the pixel value update until the current update cycle count is equal to the quantity of times of update cycle.”
Kumada teaches “forming an updated object contour image based on updated pixel values of all the pixels, and adding 1 to a current update cycle count; and” (the image securing unit 6 includes a time-series input image I (x input from the imaging means 1 at a predetermined cycle; Pg.2);
(“Elapsed time ET” is counted up as ET1, ET2, ET3,... ETn. When the (n + 1); Pg.4);
(the “movement time TM” in the background information of the pixel is incremented by 1; Pg.8);
“returning to re-perform the pixel value update until the current update cycle count is equal to the quantity of times of update cycle.” (the image securing unit 6 includes a time-series input image I (x input from the imaging means 1 at a predetermined cycle; Pg.2);
(the latest pixel value (previous pixel value) in the “time-series data” in the background information of the pixel is set as the current pixel value; Pg.8);
The time-series data relates to the update cycle used during the reshaping and pixel value updates. As the time series is updated, the pixel values are also updated based on the update cycle.
The motivation for the above is to have an updated contour image based on accurate pixel information.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu, Guo and Koenig by forming an updated object contour image based on updated pixel values of all the pixels, and adding 1 to a current update cycle count; and returning to re-perform the pixel value update until the current update cycle count is equal to the quantity of times of update cycle, as taught by Kumada.
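As an illustration of the cycle-counted update recited in claim 8 (hypothetical Python/NumPy; the mapping from reshaping information to a cycle count and the blend-based pixel update strategy are illustrative assumptions):

```python
# Illustrative sketch only: the cycle-counted contour update of claim 8.
import numpy as np

def cyclic_contour_update(contour, reshaping_info):
    """Derive the quantity of update cycles from the selected body
    reshaping information (an assumed mapping), then repeatedly apply a
    set pixel value update strategy to every pixel, adding 1 to the
    cycle count each pass until it equals that quantity."""
    n_cycles = int(reshaping_info.get("strength", 1))
    count = 0
    while count < n_cycles:
        # Assumed strategy: blend each pixel toward its left neighbor.
        contour = ((contour.astype(np.float32)
                    + np.roll(contour, 1, axis=1)) / 2).astype(contour.dtype)
        count += 1
    return contour

contour = (np.random.rand(64, 64) * 255).astype(np.uint8)
updated = cyclic_contour_update(contour, {"body": "waist", "strength": 3})
```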
Claim 16 is directed to an electronic device, and its limitations are similar in scope to the functions performed by the effect processing method of claim 8. Therefore, the limitations of claim 16 are also rejected with the same rationale as claim 8.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
CN-115063565-A (Wang) – Discloses a try-on method and device of a wearable article. Acquiring a target body part of a user, wherein the target body part is used for trying on the wearable article.
CN-114758027-A (Diao) – Discloses obtaining a to-be-processed image corresponding to a target image special effect in response to a special effect triggering operation used for enabling the target image special effect.
CN-114494556-A (Tao) – Discloses a special effect rendering method that obtains a target image which comprises a target object and renders the 3D object model to the target image based on the depth information of the 3D object model.
CN-114529657-A (Leng) – Discloses a rendered image generation method that obtains a first rendering model generated after a three-dimensional model of a target object is rendered through a color map.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIGITER D PROTAZI whose telephone number is (571)272-7995. The examiner can normally be reached Monday - Friday 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.D.P./Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612