Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN 202210220029, filed on March 8, 2022.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 10/07/2024 and 06/24/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings are objected to under 37 CFR 1.83(a) because they fail to show user equipment 2 as described in the specification in [0040] and [0041]. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to under 37 CFR 1.83(a) because they fail to show vertical lines with endpoints 306 and 307 in Fig. 3 as described in the specification in [0070]. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
In paragraph [0035], “unsmoothnesss” should read “unsmoothness”.
In paragraph [0084], “Mesh” should read “mesh”.
Appropriate correction is required.
Claim Objections
Claims 3 and 16 are objected to because of the following informalities:
In Claim 3, “by other two” should read “by the other two”.
In Claim 16, “by other two” should read “by the other two”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 9 recites “in response to the line effect processing request, obtaining a time-stamp initiating the line effect processing request”. This is unclear because, if the time-stamp is obtained in response to the line effect processing request, the time-stamp cannot initiate that request; the line effect processing request has already been initiated. The examiner will interpret “in response to the line effect processing request, obtaining a time-stamp initiating the line effect processing request” as “in response to the line effect processing request, obtaining a time-stamp”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 7, 11, 12, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yan (CN 114125320 A) in view of Hou et al. (US 11232285 B2), hereinafter referenced as Hou.
Regarding Claim 1, Yan discloses a method of line effect processing (Yan, [0006], discloses a method for generating image special effects), comprising…
collecting a plurality of contour points formed by an object contour of a target object in an image to be processed (Yan: [0085], teaches identifying outline key points outside a target object <contour points of a target object> in a video frame <image to be processed>; [Fig. 1], Step 102);
[Reproduced figure from Yan omitted.]
performing a point expansion processing on the plurality of contour points to obtain a plurality of vertices (Yan: [0093], discloses establishing extension points on two sides of a key point <point expansion processing> <and therefore obtaining a plurality of vertices>; [Fig. 1], Step 103);
[Reproduced figure from Yan omitted.]
generating at least one texture curve based on adjacent pairs of vertices in the plurality of vertices (Yan: [0093], discloses forming a quadrilateral region according to extension points <vertices> corresponding to each pair of adjacent key points <the extension points corresponding to a key point are adjacent, and the extension points on each side of adjacent key points are also adjacent>; multiple quadrilateral regions are formed and connected, which creates a filling region <texture curve, referring to a procedural path used to generate special effects>; [Fig. 1], Step 103; [Fig. 2], extension points p6 and p8 are adjacent, as are p7 and p8, p5 and p8, and p5 and p6; these adjacent pairs of vertices are connected to create a quadrilateral region, and the plurality of quadrilateral regions <base of the texture curve> together form the filling area <the texture curve>);
generating an effect line frame with the at least one texture curve (Yan: [0100] discloses generating a filled area <effect line frame, an area filled with a special effect> by filling the filling area <texture curve> with a texture map that gives an effect; [Fig. 1], Step 104);
Yan does not disclose
and mapping the effect line frame to the image to be processed, to obtain a target image corresponding to the image to be processed.
However, Hou discloses
and mapping the effect line frame to the image to be processed, to obtain a target image corresponding to the image to be processed (Hou: [Col 3, ln 6], discloses adding a special effect material <effect line frame, an area with a special effect> to a face image <image to be processed> <therefore obtaining a target image corresponding to the image to be processed>).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method as taught by Yan by mapping an effect line frame onto an original image to create an image with a special effect, as taught by Hou. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to add a stylistic flair to an image, imparting creativity.
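For illustration only, and not drawn from Yan or Hou, the following Python sketch shows one way a contour-expansion-and-fill pipeline of the kind mapped above could be realized: contour points are offset to both sides of the local contour direction to produce vertices, and adjacent pairs of vertices are joined into quadrilaterals that together form the region to be filled with the effect texture. The function names and line-width handling are hypothetical.

import numpy as np

def expand_contour(points, line_width):
    """Offset each contour point to both sides of the local contour direction,
    producing two expansion points ("vertices") per contour point."""
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)                                # local direction at each point
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)      # 90-degree rotation of the tangent
    half = line_width / 2.0
    return pts + half * normals, pts - half * normals                  # left side, right side

def quads_from_vertices(left, right):
    """Join the expansion points of adjacent contour points into quadrilaterals;
    together the quadrilaterals form the filling region for the effect line."""
    return [np.array([left[i], left[i + 1], right[i + 1], right[i]])
            for i in range(len(left) - 1)]

if __name__ == "__main__":
    contour = [(10, 10), (20, 12), (30, 18), (40, 30)]                 # toy "object contour"
    left, right = expand_contour(contour, line_width=4.0)
    for quad in quads_from_vertices(left, right):
        print(quad.round(2))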
Regarding Claim 11, it recites limitations similar in scope to Claim 1, but as an electronic device. As shown in the rejection, the combination of Yan and Hou disclose the limitations of Claim 1. Additionally, they disclose
An electronic device, comprising: a processor, and a memory, the memory stores computer executable instructions, and the processor executes the computer executable instructions stored in the memory so that the processor is configured with the line effect processing method (Yan: [0060], recites “an electronic device, including a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the image effect generation method.”) comprising: …
Regarding Claim 12, it recites limitations similar in scope to Claims 1 and 11, but as a non-transitory computer-readable storage medium. As shown in the rejection, the combination of Yan and Hou disclose the limitations of Claims 1 and 11. Additionally, they disclose
A non-transitory computer-readable storage medium having stored therein computer executable instructions, that when executed by a processor, implement the line effect processing method (Yan: [0061], recites “a computer-readable storage medium, which, when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, enables the electronic device to perform the image effect generation method”) comprising: …
Regarding Claims 6 and 19, the combination of Yan and Hou disclose the method and electronic device of Claims 1 and 11 respectively. They further disclose
mapping the at least one texture curve to a line frame model, to obtain a curve line frame corresponding to the object contour (Hou: [Col 4, ln 28], discloses adding a predesigned special-effect material <texture curve> to a corresponding grid area <line frame model> to obtain an initial face image to which the special effect is added; the grid area is derived from the contour key points, see Figs. 2D-2E);
[Reproduced figures from Hou (Figs. 2D and 2E) omitted.]
and setting a rendering attribute for a line frame area corresponding to the curve line frame, to obtain the effect line frame (Hou: [Col 5, ln 10], discloses adding color adjustment as a parameter <rendering attribute> to the special-effect material <corresponding to the curve line frame>).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device of Yan by adding a predesigned special-effect material with different parameters to a grid area as taught by Hou. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to enhance visualization, texture, and movement within an image or frame.
Regarding Claims 7 and 20, the combination of Yan and Hou disclose the method and electronic device of Claims 1 and 11 respectively. They further disclose
determining image positions corresponding to the plurality of contour points in the image to be processed (Yan: [0101], discloses mapping the texture directly onto the entire region or each quadrilateral area <both correspond with the key points on the image to be processed>);
determining the line frame positions corresponding to the plurality of contour points in the line effect frame (Yan: [0101], discloses mapping the texture directly onto the entire region or each quadrilateral area <both correspond with the key points in the texture map>);
performing position matching between the line frame positions and the image positions corresponding to the plurality of contour points to obtain a position matching relationship of the plurality of contour points (Yan: [0025], discloses establishing a one-to-one correspondence between the texture material sub-blocks and the filled sub regions; [0228], discloses the matching submodule to deform texture material so that the shape of the sub-block matches the shape of the corresponding filled sub-region <half of the quadrilateral area>);
and mapping the effect line frame to the image to be processed based on the position matching relationship of the plurality of contour points to obtain the target image corresponding to the image to be processed in response to an end of a mapping of the effect line frame (Yan: [0025], discloses establishing a one-to-one correspondence <position matching relationship> between the texture material sub-blocks <effect line frame> and the filling sub regions <where each subregion is built around the plurality of contour points in the image to be processed>; [0027], discloses filling <mapping, according to the one-to-one correspondence> the texture material sub-blocks <effect line frame> to the video frame <image to be processed> by filling the subregions <built around the plurality of contour points> with the texture material sub-blocks <effect line frame> <therefore obtaining the target image corresponding to the image to be processed in response to an end of a mapping of the effect line frame>).
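As a purely illustrative aside (not taken from Yan), the position-matching relationship discussed above can be viewed as a mapping between contour-point positions in the effect line frame's coordinate system and the same points' positions in the image to be processed. The sketch below estimates such a mapping as a least-squares affine transform; the function names and the assumption of an affine relationship are illustrative assumptions, not the cited method.

import numpy as np

def fit_affine(frame_pts, image_pts):
    """Solve for A (2x2) and t (2,) such that image ~= frame @ A.T + t."""
    frame_pts = np.asarray(frame_pts, float)
    image_pts = np.asarray(image_pts, float)
    X = np.hstack([frame_pts, np.ones((len(frame_pts), 1))])     # homogeneous design matrix [x, y, 1]
    params, *_ = np.linalg.lstsq(X, image_pts, rcond=None)       # shape (3, 2)
    return params[:2].T, params[2]                               # A, t

def map_points(pts, A, t):
    """Apply the fitted transform to frame-space points."""
    return np.asarray(pts, float) @ A.T + t

if __name__ == "__main__":
    frame_positions = [(0, 0), (100, 0), (100, 50), (0, 50)]      # positions in the effect line frame
    image_positions = [(30, 40), (130, 45), (128, 95), (28, 90)]  # matching positions in the image
    A, t = fit_affine(frame_positions, image_positions)
    print(map_points([(50, 25)], A, t))                           # where the frame's center lands in the image

A per-sub-region correspondence, as in the one-to-one sub-block mapping cited above, could instead be obtained by fitting one such transform per quadrilateral.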
Claims 2, 15, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yan and Hou in view of Hsu (US 8884955 B1), hereinafter referenced as Hsu.
Regarding Claims 2, 15, and 23, the combination of Yan and Hou disclose the method, electronic device, and non-transitory computer-readable medium of Claims 1, 11, and 12 respectively. The combination of Yan and Hou further disclose wherein, performing a point expansion processing on the plurality of contour points, and obtaining a plurality of vertices comprises
connecting two contour points satisfying an adjacent condition among the plurality of contour points, to obtain at least one contour line segment (Yan: [0093], discloses a line connecting two adjacent key points; [Fig. 1], Step 103; [Fig 2], reference points 01, 02, 03, 04, and the line connecting them <each segment, O1O2, O2O3, and O3O4, reads as a contour line segment, and therefore obtaining at least one contour line segment>);
determining an expansion line segment corresponding to the contour line segment based on a line width, to obtain an expansion line segment corresponding to the at least one contour line segment (Yan: [0122-0123], discloses establishing two extension points on the line of the extension point, equidistant from the key point <based on width> <and therefore obtaining an expansion line segment corresponding to the at least one contour line segment>);
determining, using the expansion line segment corresponding to the at least one contour line segment, two end points of the expansion line segment as two expansion points to obtain a plurality of expansion points comprised of expansion points of the at least one expansion line segment (Yan: [0122-0123], discloses the line connecting the two extension points corresponding to each key points passes through the key point, each key point is located in the middle of the two extension points <and therefore obtaining a plurality of expansion points comprised of expansion points of the at least one expansion line segment>; [Fig. 2]);
They do not disclose
and de-duplicating the plurality of contour points and the plurality of expansion points to obtain the plurality of vertices.
However, Hsu discloses
and de-duplicating the plurality of contour points and the plurality of expansion points to obtain the plurality of vertices (Hsu: [Col 7, ln 9], discloses a vertex reducing engine that merges common vertices to remove duplicates in geometry < therefore obtaining the plurality of vertices>).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method, device, or non-transitory computer-readable medium disclosed by Yan and Hou by deduplicating vertices as taught by Hsu. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification in order to enhance memory efficiency and accelerate rendering to optimize performance.
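For context only, and not as a description of Hsu's vertex reducing engine, the sketch below shows a straightforward way to de-duplicate a list of 2-D vertices by merging points that coincide within a tolerance. The tolerance value and return format are illustrative assumptions.

def deduplicate_vertices(vertices, tol=1e-6):
    """Return the unique vertices and an index map from each input vertex
    to its position in the de-duplicated list."""
    unique, index_map = [], []
    for v in vertices:
        for j, u in enumerate(unique):
            if abs(v[0] - u[0]) <= tol and abs(v[1] - u[1]) <= tol:
                index_map.append(j)          # coincident with an existing vertex: merge
                break
        else:
            index_map.append(len(unique))    # first occurrence: keep it
            unique.append(v)
    return unique, index_map

if __name__ == "__main__":
    verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 1.0)]
    print(deduplicate_vertices(verts))       # ([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)], [0, 1, 0, 2])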
Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yan, Hou, and Hsu in view of Liu et al. (US 2022/0319011 A1), hereinafter referenced as Liu1, and in further view of Liu (CN 112581620 A), hereinafter referenced as Liu2.
Regarding Claims 3 and 16, the combination of Yan, Hou, and Hsu disclose the method and electronic device of Claims 2 and 15. They further disclose
whether two vertices corresponding to the contour line segment comprise a tail point (Yan: [Fig. 2], illustrates two vertices in a contour line segment, for example, O2O3, or any of the connected key points O1-O4; [Fig. 2] also illustrates vertices in a contour line segment comprising a tail point, for example O1 in line segment O1O2, the initial stroke key point <reads as tail point, referring to one of the two end points of a contour, O4 would be the other tail point>);
[Reproduced figure from Yan omitted.]
in response to a condition, determining a line perpendicular to the contour line segment on a non-tail point of the two vertices according to the line width, and determining the other end point of the line as an expansion point corresponding to the contour line segment (Yan: [0117], discloses in response to key points being extracted, determining a line perpendicular to the key point; [Fig. 2] illustrates O1O2 and O3O4, both being contour line segments comprising tail points O1 and O4, respectively; [Fig. 2] illustrates a line perpendicular to the contour line segment on O3 <non-tail point> of O3O4 and the other end point of the line being expansion point p6; [0097], discloses extension point (p6) is established around a key point (O3) <reads on corresponding>; discloses establishing two extension points on the line of the extension point, equidistant from the key point <based on width, and by connecting the key point and the extension point, the line is determined according to line width>);
determining a line segment formed by connecting the expansion point with the tail point as the expansion line segment corresponding to the contour line segment (Yan: [0117], discloses determining a line segment connecting a key point to an extension point using a perpendicular line <determining the expansion line segment that corresponds to the contour line segment>; [Fig. 2], illustrates this with tail point O4, where a line segment p8O4 <expansion line segment> connects p8 <expansion point> and O4 <tail point>, and it corresponds with contour line segment because it is perpendicular to it);
in response to a condition, determining a line perpendicular to the contour line segment according to the line width for two contour points of the contour line segment (Yan: [0117], discloses in response to key points being extracted, determining a line perpendicular to the key point on the contour line; [Fig. 2], illustrates contour line segment O2O3 where neither vertex comprises a tail point, for each vertex, there is a line perpendicular to the contour line segment according to the line width, O2p4 and O3p6; [0097], discloses extension point (p6) is established around a key point (O3) <reads on corresponding>; discloses establishing two extension points on the line of the extension point, equidistant from the key point <based on width, and by connecting the key point and the extension point, the line is determined according to line width>),
and determining a line segment formed by other two end points corresponding to the two lines as the expansion line segment (Yan: [Fig. 2], illustrates p4p6, interpreted as a line segment formed by the other two end points, corresponding to the two lines, O2p4 and O3p6, as the expansion line segment; [0063], discloses two expansion points with connecting lines passing through).
Yan, Hou, and Hsu fail to disclose detecting whether a tail point is comprised, in response to a tail point being comprised, determining something, and in response to no tail point being comprised, determining something.
However, Liu1 discloses
detecting whether a tail point is comprised ([0016], discloses whether a point is an edge point <reads on tail point>);
in response to a tail point being comprised, determining ([0016], discloses if a point is found to be an edge point, it is marked) …;
in response to no tail point being comprised, determining ([0016], discloses if an edge point is not found, the image will continue being traversed) … .
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device taught by the combination of Yan, Hou, and Hsu by determining whether a tail point is comprised and taking some action in response as taught by Liu1. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to treat tail points differently than non-tail points on a contour line segment.
However, Liu2 discloses
determining a vertical structure and determining the other end point of the vertical line as an expansion point (Liu2: [0067], discloses determining a vertical vector with an endpoint corresponding to the extension point);
determining a vertical structure (Liu2: [0067], discloses determining a vertical vector).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device taught by the combination of Yan, Hou, Hsu, and Liu1 by determining a vertical structure as taught by Liu2. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to impart structured and directed key point expansion while maintaining the original structure.
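The tail-point branching discussed in this rejection can be pictured with a short, purely illustrative sketch (not taken from Yan, Liu1, or Liu2): for each contour line segment, the code checks whether either vertex is a tail point (an end point of the whole contour) and places the expansion line segment accordingly. The handling of a two-point contour and the offset convention are assumptions for illustration.

import math

def expansion_segment(contour, i, line_width):
    """Expansion line segment for the contour segment (contour[i], contour[i+1]).
    The first and last contour points are treated as tail points."""
    p, q = contour[i], contour[i + 1]
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length * line_width, dx / length * line_width   # perpendicular offset
    p_is_tail = (i == 0)
    q_is_tail = (i + 1 == len(contour) - 1)
    if p_is_tail and not q_is_tail:
        # Tail point at p: perpendicular only at the non-tail point q; the expansion
        # segment joins the tail point to the other end point of that perpendicular.
        return (p, (q[0] + nx, q[1] + ny))
    if q_is_tail and not p_is_tail:
        return ((p[0] + nx, p[1] + ny), q)
    # No tail point (or a two-point contour): perpendiculars at both contour points;
    # the expansion segment joins their other end points.
    return ((p[0] + nx, p[1] + ny), (q[0] + nx, q[1] + ny))

if __name__ == "__main__":
    contour = [(0, 0), (10, 0), (20, 0), (30, 0)]
    for i in range(len(contour) - 1):
        print(expansion_segment(contour, i, line_width=3.0))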
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yan and Hou in view of Fan et al. (“A realtime curvature-smooth interpolation scheme and motion planning for CNC machining of short line segments”, 2015), hereinafter referenced as Fan.
Regarding Claims 4 and 17, the combination of Yan and Hou disclose the method and electronic device of Claims 1 and 11 respectively. They further disclose
determining a plurality of line segments corresponding to the plurality of vertices (Yan: [0097], discloses the outlines <quadrilateral regions in Fig. 2> make up the filling area; [Fig. 2], illustrates a plurality of line segments corresponding to a plurality of vertices making up quadrilaterals);
and performing a line segment attribute configuration on the smooth curve of the set of line segments to obtain respective texture curves of the at least one set of line segments after the attribute configuration ends (Yan: [0025], discloses establishing a one-to-one correspondence between the texture material sub-blocks and the filled sub regions <half of quadrilaterals, made up of line segments>; [0027], discloses the texture material sub-blocks are filled <attribute configuration> into the corresponding subregions on the video frame <where line segments making up the quadrilaterals and subregions are also affected by the fill>).
They fail to disclose
performing line segment intersection calculation on any two segments satisfying a condition of line segment intersection among the plurality of line segments, and obtaining an intersection corresponding to at least one set of line segments among the plurality of line segments for any set of line segments,
performing an intersection smoothing processing on two line segments of the set of line segments and an intersection of the two line segments, and obtaining a smooth curve corresponding to the set of line segments;
However, Fan discloses
performing line segment intersection calculation on any two segments satisfying a condition of line segment intersection among the plurality of line segments, and obtaining an intersection corresponding to at least one set of line segments among the plurality of line segments for any set of line segments (Fan: [Fig. 3], shows Pi and Pi-1 as the intersection points of the sets of line segments b and a and line segments c and b, respectively; [Section 3.1], discloses setting <calculating> the point sequence {Pi}i=0…N <where each point in the sequence is an intersection point of any two line segments satisfying a condition of line segment intersection among a plurality of line segments, e.g., a, b, and c> <and therefore obtaining an intersection corresponding to at least one set of line segments among the plurality of line segments for any set of line segments>),
[Reproduced figure from Fan omitted.]
performing an intersection smoothing processing on two line segments of the set of line segments and an intersection of the two line segments, and obtaining a smooth curve corresponding to the set of line segments (Fan: [Section 2], discloses a transitional scheme between two line segments using a Bezier curve to obtain a smooth transition, where point P is the point of intersection and the set of line segments are both segments in Fig. 1);
[Reproduced figure from Fan omitted.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method and device taught by Yan and Hou by performing an intersection smoothing processing to obtain a smooth curve corresponding to the set of line segments as taught by Fan. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to enhance visual aesthetics and visualize continuous and dynamic motion.
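To make the intersection-smoothing idea concrete, here is a minimal sketch, not Fan's actual interpolation scheme, that replaces the sharp corner where two line segments meet at an intersection point P with a quadratic Bezier transition. The trim fraction and sample count are illustrative parameters.

import numpy as np

def smooth_corner(a, p, b, trim=0.3, samples=10):
    """Replace the corner a-p-b with a quadratic Bezier whose end points sit a
    fraction `trim` of the way back along each segment from P."""
    a, p, b = (np.asarray(v, float) for v in (a, p, b))
    start = p + trim * (a - p)                     # point on segment a-P near the corner
    end = p + trim * (b - p)                       # point on segment P-b near the corner
    t = np.linspace(0.0, 1.0, samples)[:, None]
    # Quadratic Bezier with the original intersection point P as the control point.
    return (1 - t) ** 2 * start + 2 * (1 - t) * t * p + t ** 2 * end

if __name__ == "__main__":
    print(smooth_corner((0, 0), (10, 0), (10, 10)).round(2))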
Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yan, Hou and Fan in view of Kang et al. (“A unified scheme for adaptive stroke-based rendering”, 2006), hereinafter referenced as Kang.
Regarding Claims 5 and 18, the combination of Yan, Hou, and Fan disclose the method and electronic device of Claims 4 and 17 respectively. They further disclose
performing attribute configuration on the smooth curve of the set of line segments, and obtaining the respective texture curves of the at least one set of line segments after the attribute configuration ends (Yan: [0025], discloses establishing a one-to-one correspondence between the texture material sub-blocks and the filled sub regions <half of quadrilaterals, made up of line segments>; [0027], discloses the texture material sub-blocks are filled <attribute configuration> into the corresponding subregions on the video frame <where line segments making up the quadrilaterals and subregions are also affected by the fill>).
They do not disclose
performing a color attribute and/or a width attribute configuration
However, Kang discloses
performing a color attribute and/or a width attribute configuration (Kang: [Sections 2 and 3], discloses a width attribute for outlines; [Section 3], discloses a color attribute for lines, thereby disclosing color and width attributes for lines).
[Reproduced figure from Kang omitted.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method and device disclosed by Yan, Hou, and Fan by performing a color attribute and/or a width attribute configuration as disclosed by Kang. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to impart artistic styling on line effects.
Claims 8 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yan and Hou in view of Ogawa (US 2016/0110853 A1), hereinafter referenced as Ogawa.
Regarding Claims 8 and 21, the combination of Yan and Hou disclose the method and electronic device of Claims 7 and 20 respectively. They do not disclose the limitations of Claims 8 and 21; however, Ogawa discloses
performing a position correction processing on the effect line frame in the target image, to obtain a corrected target image (Ogawa: [0140], discloses adjusting the positions of pixels in an image <position correction processing> by adjusting the pixels in an adjustment region <interpreted as the effect line frame> in an image <therefore obtaining a corrected target image>).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method and device disclosed by the combination of Yan and Hou by performing position correction to obtain an adjusted image as taught by Ogawa. One of ordinary skill in the art before the effective filing date would have been motivated to make this modification to reduce measurement errors and misalignments.
Claims 9 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yan and Hou in view of Naphade et al. (US 2022/0053171 A1), hereinafter referenced as Naphade.
Regarding Claims 9 and 22, the combination of Yan and Hou disclose the method and electronic device of Claims 1 and 11 respectively. They further disclose
in response to detecting that the user device is playing the image to be processed, rendering the target image corresponding to the effect line frame according to a rendering attribute corresponding to the effect line frame (Yan: [0080], discloses adding real-time outline texture effects <rendering attribute, which corresponds to the texture map> to the video that a user is currently playing <it is known that the user is playing the video in real time>; [Fig. 1], discloses, in step 104, obtaining the video frame after a texture map <effect line frame, which corresponds to the real-time outline texture effects> is applied);
and controlling the user device to present the rendered target image (Yan: [0080], discloses adding real-time texture effects to the video that a user is currently playing <effects are rendered frame by frame>);
and collecting an image to be processed from the target video according to a collection frequency (Yan: [0090], describes the target video being processed frame by frame at a preset frame interval <reading on collection frequency>),
sent by the user device for a target video during a process of playing the target video by the user device (Yan: [0077], teaches the target video can be a video file selected by the user or a video being shot in real time);
They do not disclose
receiving a line effect processing request sent by user device for a target video during a process of playing a target video by the user device;
in response to the line effect processing request, obtaining a time-stamp initiating the line effect processing request
and collecting, based on the time-stamp, a corresponding image to be processed from the target video
However, Naphade discloses
receiving a line effect processing request for a target video during a process of playing a target video by the user device (Naphade: [0004], discloses a device providing <user device> a request to annotate <line effect processing request> a video stream <a target video during the process of playing the target video>);
in response to the line effect processing request, obtaining a time-stamp initiating the line effect processing request (Naphade: [0004], recites “In processing the request or query, the timestamps may be used to retrieve video data representing frames of the video stream.”);
and collecting, based on the time-stamp, a corresponding image to be processed from the target video (Naphade: [0004], discloses timestamps used to retrieve video data representing frames).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method and device disclosed by Yan and Hou by receiving a user request and obtaining a timestamp with its corresponding image as taught by Naphade. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to reduce computational load and structure data chronologically.
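As a purely illustrative note on the time-stamp limitation (not Naphade's system), mapping a request time-stamp to the corresponding frame of the target video can be as simple as the sketch below, which assumes a constant frame rate; the frame-rate assumption and function name are hypothetical.

def frame_index_for_timestamp(timestamp_s, fps, total_frames):
    """Map a time-stamp (seconds from the start of playback) to a frame index."""
    index = int(round(timestamp_s * fps))
    return max(0, min(index, total_frames - 1))   # clamp to the valid range

if __name__ == "__main__":
    # A request arriving 2.37 s into a 30 fps, 900-frame video maps to frame 71.
    print(frame_index_for_timestamp(2.37, fps=30, total_frames=900))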
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Wong (US 10529129 B2) discloses a method for creating a visual effect in a video.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISABELLA OCHSNER whose telephone number is (571)272-9322. The examiner can normally be reached 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached at (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.O./Examiner, Art Unit 2618
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618