DETAILED ACTION
This Office action is responsive to the response filed 7/17/2025. The application contains claims 1, 3-13, and 15-20, all of which have been examined and rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 1 is objected to because of the following informality: the claim recites “executed to edit to the digital content” instead of “executed to edit the digital content”. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claim limitations in amended claim 12 have been interpreted under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, because the claim uses a non-structural term, “edit recording system”, coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the non-structural term is not preceded by a structural modifier.
Specifically, claim 12 recites the limitation “an edit recording system” coupled with the functional language “generating” without reciting sufficient structure to achieve the function.
Since these claim limitations invoke 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA ), sixth paragraph, claims 12-16 are interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA ), sixth paragraph limitation:
Fig. 2, elements 116 and 104; ¶27: “The computing device 102 is illustrated as including a content processing system 104. The content processing system 104 is implemented at least partially in hardware of the computing device 102 to process and transform digital content 106”. Based on the guidelines announced in Federal Register Vol. 76, No. 27, this has been interpreted as encompassing a hardware implementation, or hardware in combination with software, of the edit recording system, but not a pure software implementation.
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. Claimed modules also trigger interpretation of the claim language under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, since they are considered a placeholder for a corresponding structure in the specification.
If applicant does not wish to have the claim limitation treated under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA ), sixth paragraph, applicant may amend the claim so that it will clearly not invoke 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA ), sixth paragraph, or present a sufficient showing that the claim recites sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA ), sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance with 35 U.S.C. § 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 recites “automatically generating, the detecting, the generating, the receiving, the providing, the searching, the displaying execution of the at least one edit operation, and the displaying the logs are performed in real time as the user input is received via the user interface”. It is unclear how the user input used to identify a location within the display area, which is logged in the log data searched by the query, could occur in real time with the creation of the log data associated with the editing operation that the same input is used to search within.
Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites “logs identifying … edit operations performed at the indicated location”. It is unclear which indicated location the claim limitation refers to, as the claim discloses indicated locations both outside and inside the rectangular area. For examination purposes, the examiner considers the indicated locations to be those within the selected area. The dependent claims inherit the deficiency of the independent claim.
Claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 12 recites “logs identifying … edit operations performed at the indicated location”. It is unclear which indicated location the claim limitation refers to, as the claim discloses indicated locations both outside and inside the rectangular area. For examination purposes, the examiner considers the indicated locations to be those within the selected area. The dependent claims inherit the deficiency of the independent claim.
Claims 17-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 17 recites “logs identifying … edit operations performed at the indicated location”. It is unclear which indicated location the claim limitation refers to, as the claim discloses indicated locations both outside and inside the rectangular area. For examination purposes, the examiner considers the indicated locations to be those within the selected area. The dependent claims inherit the deficiency of the independent claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-13, and 15-16 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with USPTO guidelines, applies to all statutory categories, and is explained in detail below.
When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea), and if so, (2b) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include certain methods of organizing human activity; mental processes; and mathematical concepts. (2019 PEG)
STEP 1.
Per Step 1, the claims fall within one of the statutory categories, as in independent Claims 1 and 12 and the claims dependent therefrom. Therefore, the claims are directed to a statutory eligibility category.
STEP 2A.
The invention is directed to searching log data based on a received search request and providing the results to a user (Mental Process). As such, the claims include an abstract idea.
The limitations that have been identified as an abstract idea are:
“generating, log data that describes a plurality of edit operations executed to edit digital content, the log data including location data indicating a plurality of locations within the digital content at which the plurality of edit operations are executed” (Mental Process, a user can generate log data); “user input identifying a location within the digital content displayed in a user interface, the location defining an area within the content” (Mental Process, a user can select a point within displayed content); “generating, a search query based on the location, the search query including a set of coordinates specifying a boundary of the area within the content” (Mental Process, a user can start a search based on a point within displayed content); “searching and based on the search query the log data by filtering out logs of the log data associated with location data indicating locations outside of the area; and obtaining, based on the filtering, a search result including logs of the log data associated with location data indicating locations disposed within the area, the logs identifying the location data and at least one edit operation of the plurality of edit operations performed at the indicated locations” (Mental Process, a user can identify logs associated with a specific location within displayed content).
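To illustrate the high level of generality at which the identified filtering limitation is recited, it can be sketched in a few lines of generic code. This is an illustrative sketch only; the field names and data layout are hypothetical assumptions, not drawn from the claims or the specification:

```python
# Hypothetical sketch of the recited bounding-box filtering. The field
# names ("op", "x", "y") and the log layout are illustrative assumptions.

def search_logs(log_data, area):
    """Return log entries whose location falls inside `area`.

    `area` is (x_min, y_min, x_max, y_max): a set of coordinates
    specifying a boundary of an area within the content.
    """
    x_min, y_min, x_max, y_max = area
    return [
        entry for entry in log_data
        if x_min <= entry["x"] <= x_max and y_min <= entry["y"] <= y_max
    ]

logs = [
    {"op": "crop", "x": 10, "y": 12},
    {"op": "blur", "x": 90, "y": 95},
    {"op": "sharpen", "x": 15, "y": 20},
]

# Logs outside the area are filtered out; only "crop" and "sharpen" remain.
result = search_logs(logs, (0, 0, 50, 50))
```

As the sketch shows, the step reduces to a coordinate comparison against a boundary, with no particular technology recited for performing it.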
This judicial exception is not integrated into a practical application.
The claims recite additional elements: “method implemented by a computing device”, “by the computing device”, and “digital content” are limitations that invoke computers or other machinery merely as a tool to perform an existing process (MPEP 2106.05(f)(2)); “detecting, by the computing device, a user input” is insignificant extra-solution activity (MPEP 2106.05(g)); and “generating, by the computing device”, “providing, by the computing device”, “receiving, by the computing device”, and “displaying, by the computing device” are limitations that invoke computers or other machinery merely as a tool to perform an existing process (MPEP 2106.05(f)(2)). In addition, these limitations do not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016) (MPEP 2106.05(f)(1)). “TLI Communications provides an example of a claim invoking computers and other machinery merely as a tool to perform an existing process. The court stated that the claims describe steps of recording, administration and archiving of digital images, and found them to be directed to the abstract idea of classifying and storing digital images in an organized manner. 823 F.3d at 612, 118 USPQ2d at 1747. The court then turned to the additional elements of performing these functions using a telephone unit and a server and noted that these elements were being used in their ordinary capacity (i.e., the telephone unit is used to make calls and operate as a digital camera including compressing images and transmitting those images, and the server simply receives data, extracts classification information from the received data, and stores the digital images based on the extracted information). 823 F.3d at 612-13, 118 USPQ2d at 1747-48. In other words, the claims invoked the telephone unit and server merely as tools to execute the abstract idea. Thus, the court found that the additional elements did not add significantly more to the abstract idea because they were simply applying the abstract idea on a telephone network without any recitation of details of how to carry out the abstract idea”.
The elements are recited at a high level of generality, i.e., a generic computing system performing generic functions, including generic processing and receiving of data. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG")). Thus, under Step 2A of the Mayo framework, the Examiner holds that the claims are directed to concepts identified as abstract.
STEP 2B.
Because the claims include one or more abstract ideas, the examiner now proceeds to Step 2B of the analysis, in which the examiner considers whether the claims include, individually or as an ordered combination, limitations that are "significantly more" than the abstract idea itself. This includes analysis as to whether there is an improvement to the "computer itself," "another technology," or the "technical field," or significantly more than what is "well-understood, routine, or conventional" in the related arts.
The instant application includes in Claim 1 additional steps beyond those deemed to be the abstract idea.
“Method implemented by a computing device”, “by the computing device”, and “digital content” are limitations that invoke computers or other machinery merely as a tool to perform an existing process (MPEP 2106.05(f)(2)); in addition, they do not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". “Detecting, by the computing device, a user input” is insignificant extra-solution activity (MPEP 2106.05(g)); based on court decisions, well-understood, routine, and conventional computer functions, mere instructions, and/or insignificant activity have been identified to include receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). In Bilski, the Court referred to Flook, in which it was determined that insignificant post-solution activity does not make an otherwise patent-ineligible claim patent eligible; the Bilski Court added that pre-solution activity (such as data gathering) and an insignificant step in the middle of a process (such as receiving user input) are equally ineffective (MPEP 2106.05(d)). These limitations do not integrate a judicial exception into a practical application or provide significantly more. “Device” is likewise a limitation that invokes computers or other machinery merely as a tool to perform an existing process (MPEP 2106.05(f)(2)) and does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016) (MPEP 2106.05(f)(1)). “TLI Communications provides an example of a claim invoking computers and other machinery merely as a tool to perform an existing process. The court stated that the claims describe steps of recording, administration and archiving of digital images, and found them to be directed to the abstract idea of classifying and storing digital images in an organized manner. 823 F.3d at 612, 118 USPQ2d at 1747. The court then turned to the additional elements of performing these functions using a telephone unit and a server and noted that these elements were being used in their ordinary capacity (i.e., the telephone unit is used to make calls and operate as a digital camera including compressing images and transmitting those images, and the server simply receives data, extracts classification information from the received data, and stores the digital images based on the extracted information). 823 F.3d at 612-13, 118 USPQ2d at 1747-48. In other words, the claims invoked the telephone unit and server merely as tools to execute the abstract idea. Thus, the court found that the additional elements did not add significantly more to the abstract idea because they were simply applying the abstract idea on a telephone network without any recitation of details of how to carry out the abstract idea”. Examiner further notes that receiving data and user input and generating output is well-understood, routine, and conventional activity.
In the instant case, Claim 1 is directed to the above-mentioned abstract idea. Technical functions such as receiving and processing data are common and basic functions in computer technology. The individual limitations are recited at a high level and do not provide any specific technology or techniques to perform the claimed functions.
Looking to MPEP 2106.05(d), based on court decisions, well-understood, routine, and conventional computer functions, mere instructions, and/or insignificant activity have been identified to include receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). In Bilski, the Court referred to Flook, in which it was determined that insignificant post-solution activity does not make an otherwise patent-ineligible claim patent eligible; the Bilski Court added that pre-solution activity (such as data gathering) and an insignificant step in the middle of a process (such as receiving user input) are equally ineffective. The claims do not provide any specific process with respect to the additional elements that would transform the functions beyond what is well understood. As found in Electric Power Group and Bilski, the technical processes implementing the input and display functions are conventional and well understood.
In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not add "significantly more". The additional steps merely implement the abstract ideas using well-understood and conventional functions, and the claims do not show improved or unconventional, non-routine functions for analyzing documents, receiving user input, or generating output that could be pointed to as being "significantly more" than the abstract ideas themselves. Moreover, the Examiner was not able to identify any "unconventional" steps which, when considered in ordered combination with the other steps, could have transformed the nature of the previously identified abstract idea. The instant application, therefore, still appears only to implement the abstract ideas in particular technological environments using what is well-understood, routine, and conventional in the related arts.
Further, note that the limitations in the instant claims are performed by generically recited computing devices. The limitations are merely instructions to implement the abstract idea on a computing device, are recited at an abstract level, and require no more than generic computing devices performing generic functions.
CONCLUSION
It is therefore determined that the instant application not only represents an abstract idea, identified as such based on criteria defined by the Courts and on USPTO examination guidelines, but also lacks the capability to bring about "Improvements to another technology or technical field" (Alice), bring about "Improvements to the functioning of the computer itself" (Alice), "Apply the judicial exception with, or by use of, a particular machine" (Bilski), "Effect a transformation or reduction of a particular article to a different state or thing" (Diehr), "Add a specific limitation other than what is well-understood, routine and conventional in the field" (Mayo), "Add unconventional steps that confine the claim to a particular useful application" (Mayo), or contain "Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment" (Alice). Nor do the claims transform a traditionally subjective process performed by humans into a mathematically automated process executed on computers (McRO), or recite limitations directed to improvements in computer-related technology, including claims directed to software (Enfish).
The dependent claims, when considered individually and as a whole, likewise do not provide “significantly more” than the abstract idea, for reasons similar to those given for the independent claim. For example:
Claim 3 discloses “wherein the log data includes: operation data describing the plurality of edit operations used to edit the digital content through interaction with the user interface of a content processing system; and time data indicating a time at which the plurality of edit operations are executed, respectively” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 4 discloses “wherein the user input identifies the location by specifying the boundary within the digital content or a digital object displayed within the digital content in the user interface” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 5 discloses “generating a digital video depicting a timelapse sequence of the at least one edit operation from the search result as being used to edit the digital content, and wherein the displaying execution of the at least one edit operation is performed using the digital video” (insignificant extra-solution activity that is well-understood, routine, and conventional, as shown by Titi et al. [US 20150350591 A1], see at least ¶13; William et al. [US 20150350544 A1], see at least ¶¶5-6; and Park et al. [US 20190364208 A1], see at least ¶75); this does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016) (MPEP 2106.05(f)(1)).
Claim 6 discloses “the digital video depicts selection of representations used to initiate the execution of the at least one edit operation” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 7 discloses “user input further identifies a particular edit operation of the plurality of edit operations, the search query specifies the particular edit operation, and the at least one edit operation of the search result corresponds to the particular edit operation as performed at the location” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 8 discloses “user input includes selecting a representation from a plurality of representations of edit operations used to edit the digital content, the plurality of representations generated by searching the log data associated with the digital content” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 9 discloses “user input further identifies a digital object within the digital content; the search query specifies the digital object; and the at least one edit operation in the search result corresponds to the digital object” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 10 discloses “automatically generating, the detecting, the generating, the receiving, the providing, the searching, the displaying execution of the at least one edit operation, and the displaying the logs are performed in real time as the user input is received via the user interface” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 11 discloses “wherein the digital content is a digital image” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use); this does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
The dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes the dependent claims are directed towards patent-eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed towards patent-eligible subject matter. Claims in the other statutory classes are similarly analyzed. For example, claim 12's additional limitation of outputting a selectable list of data is insignificant extra-solution activity that is well-understood, routine, and conventional, as shown by AVOYAN et al. [US 2021/0027510 A1], see at least Fig. 7, Fig. 3A, ¶¶69-70; Havoc Pennington et al. [US 11,687,212 B2], see at least Figs. 4-6 and Fig. 8; and Quinn et al. [US 20100241507 A1], see at least Fig. 8, and does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016) (MPEP 2106.05(f)(1)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Harris et al. [US 2013/0120439 A1, hereinafter Harris] in view of Brouwer et al. [US 2019/0096439 A1, hereinafter Brouwer].
With regard to Claim 1,
Harris teaches, in a digital medium environment, a method implemented by a computing device, the method comprising:
automatically generating, by the computing device, log data that describes a plurality of edit operations executed to edit digital content, the log data including location data indicating a plurality of locations within the digital content at which the plurality of edit operations are executed (Fig. 1, Fig. 7, ¶¶3-4, “frame identifier may include a frame number and/or a timestamp, in various embodiments. The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶¶5-6, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier. … each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶105, ¶113, ¶¶35-36, “frame identifier of each frame of the animation includes a timestamp, and each entry of the log includes a timestamp, such a determination may be performed by searching the captured image data and interaction log for matching timestamps. In other embodiments, if the frame identifier of each frame includes a frame number, each entry of the log may include a corresponding frame number, rather than to a timestamp, and these frame numbers may be matched up to determine a correlation between frames of the animation and the log entries”, ¶98, “as the animation is built, the system may use information about the target location of each image editing operation to include hints about the region of interest for a given operation in the captured image data and/or interaction logs”; a specific frame and portions of an image are location data within the digital content);
detecting, by the computing device, a user input identifying a location within the digital content displayed in a user interface (Abstract, ¶42, “if the user wishes to undo an image editing operation, they may select a visual rewind operation in the image editing application”, ¶95, “image editing application may provide mechanisms to allow the user to navigate to the point in the animation displayed by the visual rewind operation corresponding to the image state to which the user wants to return”, ¶¶4-5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier”, the user identifies a location (time point/frame) within the media displayed in a user interface);
generating, by the computing device, a search query based on the location, the search query identifying the location (Fig. 1, Fig. 7, ¶3, “image editing application may be configured to capture, compress, and store image data and interaction logs, and to use the stored information in a visual rewind operation, in which a sequence of frames (e.g., an animation) depicting changes in an image as image editing operations are performed is displayed in reverse order”, ¶4, “frame identifier may include a frame number and/or a timestamp, in various embodiments. The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier. … each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶10, ¶¶35-36, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”, navigating to a specific frame (location) is a search query);
providing, by the computing device, the search query as input to a search module (¶5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier. … each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶¶42-45, navigating to a specific timestamp of the media (location) is a search query);
searching, by the search module and based on the search query, the log data by filtering out logs of the log data associated with location data indicating locations outside of the [identified location] (¶4, “frame identifier may include a frame number and/or a timestamp”, ¶5, “each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”, ¶36, “the frame identifier of each frame of the animation includes a timestamp, and each entry of the log includes a timestamp, such a determination may be performed by searching the captured image data and interaction log for matching timestamps …”, identifying frames based on the timestamp selected by the user rewinding to a specific time point will exclude any of the frames not associated with the selected frame); and
obtaining, based on the filtering, a search result including logs of the log data associated with location data indicating locations, the logs identifying the location data and at least one edit operation of the plurality of edit operations performed at the indicated locations (¶3, “image editing application may be configured to capture, compress, and store image data and interaction logs, and to use the stored information in a visual rewind operation, in which a sequence of frames (e.g., an animation) depicting changes in an image as image editing operations are performed is displayed in reverse order”, ¶4, “frame identifier may include a frame number and/or a timestamp, in various embodiments. The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier. 
… each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶43, “in response to receiving the indication of the invocation of the visual rewind operation, the method may include initiating the display of a sequence of frames (e.g., an animation) depicting the effect(s) of one or more image editing operations on an image during performance of the operation(s) in reverse order”, ¶34, “As a log generator generates and stores entries in the interaction log, a timestamp and/or frame number may be recorded in each log entry”, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”, ¶36, “the frame identifier of each frame of the animation includes a timestamp, and each entry of the log includes a timestamp, such a determination may be performed by searching the captured image data and interaction log for matching timestamps …”);
receiving, by the computing device, the search result; displaying, by the computing device, execution of the at least one edit operation on the digital content in the user interface based on the search result (Fig. 1, Fig. 7, ¶6, “system may be configured to store the captured image data, the log data, and data representing the correlation between each of the plurality of entries and the respective frame identifier. In performing a subsequent operation of the image editing application (e.g., a visual rewind operation)”, ¶43, “in response to receiving the indication of the invocation of the visual rewind operation, the method may include initiating the display of a sequence of frames (e.g., an animation) depicting the effect(s) of one or more image editing operations on an image during performance of the operation(s) in reverse order”, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”); and
displaying, by the computing device, logs identifying the location data (¶4, “The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier … each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”) and identifying the at least one edit operation of the plurality of edit operations performed at the indicated locations (¶4, “The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”), the logs searchable to locate a respective said edit operation based on a respective said location at which the respective said edit operation was executed to edit to the digital content (¶45, “system may be configured to allow a user to initiate a rewind operation, pause, fast forward, fast reverse, and/or rewind again in various combinations (causing various frames of the 
animation to be displayed in forward or reverse order) before identifying a point of interest in the animation”, ¶4, “The system may also be configured to capture data representing a log of interactions and corresponding changes in application state for the image editing application during performance of the one or more image editing operations”, ¶5, “system may be configured to determine a correlation between each of the entries in the log and a respective frame identifier … each frame identifier includes a timestamp and each entry of the log includes a timestamp, determining a correlation may include identifying a log entry and a frame identifier that include matching timestamps”).
Harris does not explicitly teach the location defining an area within the digital content; the search query including a set of coordinates specifying a boundary of the area; or the location data indicating locations disposed within the area.
Ahmed teaches, in a digital medium environment, a method implemented by a computing device, the method comprising:
automatically generating, by the computing device, log data that describes a plurality of edit operations executed to edit digital content, the log data including location data indicating a plurality of locations within the digital content at which the plurality of edit operations are executed (Fig. 12, ¶84, “Screen Tags (20, 20 b, 20 c, 20 d, 20 e)”, Fig. 24, 300, ¶49, “user clicks on the side of the cowling of an airplane (20) to mark a location to place an annotation or Screen Tag. The Screen Tag serves as an anchor, placed by a user at a specific position (12) in a movie or image. This anchor or Screen Tag (20) is associated with the (x,y) coordinate of the position”, ¶50, “Screen Tag is an icon on screen depicting the type of content associated with that identified object. The Screen Tag can be clicked to reveal the associated type of content”);
detecting, by the computing device, a user input identifying a location within the digital content displayed in a user interface, the location defining an area within the digital content (Fig. 12, ¶84, “Screen Tags (20, 20 b, 20 c, 20 d, 20 e)”, Fig. 24, 300, ¶49, “user clicks on the side of the cowling of an airplane (20) to mark a location to place an annotation or Screen Tag. The Screen Tag serves as an anchor, placed by a user at a specific position (12) in a movie or image. This anchor or Screen Tag (20) is associated with the (x,y) coordinate of the position”, ¶50, “Screen Tag is an icon on screen depicting the type of content associated with that identified object. The Screen Tag can be clicked to reveal the associated type of content”, ¶53, “During playback the system will display the Screen Tag for a duration of, for example, 4 seconds and the user may in this time have sufficient time to click and explore the Screen Tag”);
generating, by the computing device, a search query based on the location, the search query including a set of coordinates specifying a boundary of the area within the digital content (Fig. 12, ¶16, ¶50, “Screen Tag is an icon on screen depicting the type of content associated with that identified object. The Screen Tag can be clicked to reveal the associated type of content”, ¶84, “Screen Tags (20, 20 b, 20 c, 20 d, 20 e)”, Fig. 24, 300, ¶49, “user clicks on the side of the cowling of an airplane (20) to mark a location to place an annotation or Screen Tag. The Screen Tag serves as an anchor, placed by a user at a specific position (12) in a movie or image. This anchor or Screen Tag (20) is associated with the (x,y) coordinate of the position”, ¶64, “object tracking is used differently. In the present invention, object tracking is used to detect and track a collection of pixels from a location selected by a user corresponding to an object for a predetermined time duration”, ¶¶67-68, “a period long enough for a user to see a Screen Tag and to interact with it when a video is running showing at least one Screen Tag”, ¶76, “The idea is to offer a method of grouping Screen Tags shown on screen so that they can be displayed, filtered, searched or hidden by the viewer”, ¶88, “Screen Tags are used, the clickable Screen Tags (20) will follow their positively identified elements … and will then disappear unless a user decides to click on them, which will cause the video to pause to reveal the information of the tag”);
providing, by the computing device, the search query as input to a search module (¶¶58-59, “system may use one of several known methods for tracking the collection of pixels around the location where the user wants to insert the tag”); and searching, by the search module and based on the search query, the log data by:
filtering out logs of the log data associated with location data indicating locations outside of the area (¶50, “Screen Tag can be clicked to reveal the associated type of content”, ¶¶58-59, “system may use one of several known methods for tracking the collection of pixels around the location where the user wants to insert the tag”, ¶63, “pixel tracking analysis is only required for identifying a specific element the user clicked on”); and
obtaining, based on the filtering, a search result including logs of the log data associated with location data indicating locations disposed within the area, the logs identifying the location data and at least one edit operation of the plurality of edit operations performed at the indicated locations (Fig. 12, Fig. 24, ¶84, “Screen Tags (20, 20 b, 20 c, 20 d, 20 e)”, Fig. 24, 302, ¶49, “user clicks on the side of the cowling of an airplane (20) to mark a location to place an annotation or Screen Tag. The Screen Tag serves as an anchor, placed by a user at a specific position (12) in a movie or image. This anchor or Screen Tag (20) is associated with the (x,y) coordinate of the position”, ¶52, “Pixel region tracking is used to determine whether the collection of pixels in proximity to the location identified by the user can be tracked for a predetermined length of, for example, 4 seconds. The system will determine whether the pixels are trackable over this period so that a Screen Tag can be placed at the identified position”, adding a tag to a specific location is editing the displayed media content, and selecting a tag that is associated with the annotation and displayed in the annotation location is a search for the location-associated data (excluding other locations) that could include attached videos);
receiving, by the computing device, the search result (Fig. 12, Fig. 24, ¶84, “Screen Tags (20, 20 b, 20 c, 20 d, 20 e)”, Fig. 24, 302, ¶49, “user clicks on the side of the cowling of an airplane (20) to mark a location to place an annotation or Screen Tag. The Screen Tag serves as an anchor, placed by a user at a specific position (12) in a movie or image. This anchor or Screen Tag (20) is associated with the (x,y) coordinate of the position”, ¶¶58-60, “object tracking analysis to determine whether or not the pixels corresponding to the selected object at the clicked location”, adding a tag to a specific location is editing the displayed media content and selecting a tag that is associated with the annotation and displayed in the annotation location is a search for the location associated data that could include attached videos), and
displaying, by the computing device, [video] on the digital content in the user interface based on the search result (Fig. 10, 40, ¶¶78-79, “as shown in FIG. 10, a user might add a description (37) or add, or link a file, or a video (40) or other data to the Description Tag”, Fig. 12, Fig. 24, 300, ¶¶64-65, “Screen Tag should be visible long enough for a user to notice it and to decide to whether or not to click and interact with it”, ¶77, “When clicking on the tag the user can access the additional information associated with that tag”); and
displaying, by the computing device, the logs identifying the location data and identifying the at least one edit operation of the plurality of edit operations performed at the indicated locations, the log data searchable to locate a respective said edit operation based on a respective said location at which the respective said edit operation was executed to edit to the digital content (Fig. 10, 40, ¶¶78-79, “as shown in FIG. 10, a user might add a description (37) or add, or link a file, or a video (40) or