Prosecution Insights
Last updated: April 19, 2026
Application No. 19/050,095

METHODS AND SYSTEMS FOR GENERATING INTERACTIVE COMPOSITE MEDIA ASSETS COMPRISING COMMON OBJECTS

Status: Non-Final Office Action (§ Double Patenting)
Filed: Feb 10, 2025
Examiner: WENDMAGEGN, GIRUMSEW
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Emergex LLC
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 77% (742 granted / 968 resolved), +18.7% vs Tech Center average (above average)
Interview Lift: +21.4% allowance rate lift in resolved cases with an interview (strong)
Typical Timeline: 2y 11m average prosecution; 16 applications currently pending
Career History: 984 total applications across all art units
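The headline figures above can be cross-checked from the raw counts the report gives. A minimal sketch, assuming the dashboard's allow rate is simply granted divided by resolved and that the "+18.7% vs TC avg" delta is a percentage-point difference:

```python
# Sketch: reproducing the examiner statistics from the counts above.
# Figures (742 granted of 968 resolved, +18.7% delta) come from the report;
# the rounding convention is an assumption.

granted, resolved = 742, 968
career_allow_rate = granted / resolved            # fraction of resolved cases granted
implied_tc_average = career_allow_rate - 0.187    # report states +18.7% vs TC avg

print(round(career_allow_rate * 100))             # → 77
print(round(implied_tc_average * 100, 1))         # → 58.0
```

742/968 ≈ 76.7%, which rounds to the 77% shown; the implied Tech Center average of roughly 58% is a back-calculation, not a figure stated in the report.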

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 35.1% (-4.9% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 968 resolved cases.
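The "vs TC avg" deltas in the table can be back-computed into the implied Tech Center averages. A small sketch, assuming the deltas are simple percentage-point differences between the examiner's rate and the TC estimate:

```python
# Sketch: back-computing implied Tech Center averages from the
# statute-specific table (examiner rate, delta vs TC avg), both in percent.
# Interpreting the deltas as percentage-point differences is an assumption.

rows = {
    "101": (7.3, -32.7),
    "103": (42.4, +2.4),
    "102": (35.1, -4.9),
    "112": (3.3, -36.7),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(implied_tc_avg)   # each implied TC average works out to 40.0
```

Notably, every row implies the same 40.0% Tech Center estimate, which suggests the dashboard measures each statute against a single per-TC baseline.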

Office Action

Rejection basis: Nonstatutory Double Patenting (§DP)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2-19 of U.S. Patent No. 12,033,672. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the patent anticipate the claims of the present application.
Claim chart: Patent No. 12,033,672 vs. Application No. 19/050,095

Patent claim 2 recites a method for generating interactive composite media assets comprising common objects by coordinating uncoordinated content using best-fit models applied to perimeters about the common objects, the method comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location at a shared time window; determining a shared object orientation based on the center-of-mass point for the shared geographic location, wherein the shared object orientation comprises a direction that a content capture device faced when capturing a respective media asset and locations on a near-continuous perimeter about the center-of-mass point; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first location information, first time information, and first object information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second location information, second time information, and second object information; determining that the first media asset and the second media asset correspond to the shared geographic location based on analyzing the first location information and the second location information; in response to determining that the first media asset and the second media asset correspond to the shared geographic location, determining that the first media asset and the second media asset correspond to the shared time window based on analyzing the first time information and the second time information; in response to determining that the first media asset and the second media asset correspond to the shared time window, determining that the first media asset and the second media asset correspond to the shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more of the locations on the near-continuous perimeter about the center-of-mass point; in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset, in a user interface, based on the first media asset and the second media asset, by merging the first media asset and the second media asset about the center-of-mass point; receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset.

Application claim 1 recites one or more non-transitory, computer-readable media comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first location information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second location information; determining that the first media asset and the second media asset correspond to the shared geographic location based on analyzing the first location information; in response to determining that the first media asset and the second media asset correspond to the shared geographic location, determining that the first media asset and the second media asset correspond to a shared object orientation based on an identified object; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset, using an artificial intelligence model, by merging the first media asset and the second media asset about the center-of-mass point.

Patent claim 2 recites, in part: …; in response to determining that the first media asset and the second media asset correspond to the shared time window, determining that the first media asset and the second media asset correspond to the shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more of the locations on the near-continuous perimeter about the center-of-mass point; …

Patent claim 3 recites the method of claim 2, wherein the locations on the near-continuous perimeter about the center-of-mass point are determined by filtering media assets to determine a smallest contained shape about the center-of-mass point.

Application claim 2 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset further comprises determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point, wherein the one or more locations on the near-continuous perimeter about the center-of-mass point are determined by filtering media assets to determine a smallest contained shape about the center-of-mass point.

Patent claim 4 recites the method of claim 2, wherein the locations on the near-continuous perimeter about the center-of-mass point are determined by applying a best-fit mechanism to the center-of-mass point.

Application claim 3 recites the one or more non-transitory, computer-readable media of claim 2, wherein the one or more locations on the near-continuous perimeter about the center-of-mass point are determined by applying a best-fit mechanism to the center-of-mass point.
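The independent claims describe a cascaded matching pipeline: candidate media assets are narrowed first by shared geographic location, then by shared time window, then by shared object orientation about a center-of-mass point. A minimal illustrative sketch of that cascade (not the applicant's implementation; the field names and tolerance parameters are hypothetical):

```python
# Illustrative sketch of the claimed filtering cascade:
# location -> time window -> orientation, in the order the claims recite.
from dataclasses import dataclass

@dataclass
class MediaAsset:
    location: tuple   # (lat, lon) where the asset was captured (hypothetical field)
    timestamp: float  # capture time in seconds (hypothetical field)
    bearing: float    # direction the capture device faced, in degrees (hypothetical field)

def matches(assets, center, t_window, orientation, loc_tol=0.01, bearing_tol=15.0):
    """Narrow candidates by location, then time window, then orientation."""
    by_location = [a for a in assets
                   if abs(a.location[0] - center[0]) <= loc_tol
                   and abs(a.location[1] - center[1]) <= loc_tol]
    by_time = [a for a in by_location if t_window[0] <= a.timestamp <= t_window[1]]
    return [a for a in by_time if abs(a.bearing - orientation) <= bearing_tol]

a1 = MediaAsset((40.0, -74.0), 100.0, 90.0)
a2 = MediaAsset((40.0, -74.0), 500.0, 90.0)   # same place and bearing, outside the window
hits = matches([a1, a2], center=(40.0, -74.0), t_window=(0, 200), orientation=90.0)
print(len(hits))   # → 1
```

The ordering matters to the claims: each later check runs only "in response to" the earlier one succeeding, which the sequential list comprehensions mirror.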
Patent claim 5 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting an object detail in the first media asset; and rotating the first media asset based on the object detail.

Application claim 4 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting an object detail in the first media asset; and rotating the first media asset based on the object detail.

Patent claim 6 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting a first object in the first media asset, wherein the first object has a first scale; detecting a second object in the second media asset, wherein the second object has a second scale; and using scale spaced merging to merge the first object and the second object.

Application claim 5 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting a first object in the first media asset, wherein the first object has a first scale; detecting a second object in the second media asset, wherein the second object has a second scale; and using scale spaced merging to merge the first object and the second object.

Patent claim 7 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: determining a number of objects in the first media asset and the second media asset to be blurred; and determining a level of blur based on the number of objects.

Application claim 6 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: determining a number of objects in the first media asset and the second media asset to be blurred; and determining a level of blur based on the number of objects.

Patent claim 8 recites the method of claim 2, further comprising: generating for display, in the user interface, a mapping of available media assets for the composite media asset; and generating for display, in the user interface, an indicator of the available media assets based on the first location information and the second location information.

Application claim 7 recites the one or more non-transitory, computer-readable media of claim 1, further comprising: generating for display, in the user interface, a mapping of available media assets for the composite media asset; and generating for display, in the user interface, an indicator of the available media assets based on the first location information and the second location information.

Patent claim 2 recites, in part: …; receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset.

Application claim 8 recites the one or more non-transitory, computer-readable media of claim 1, further comprising: receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset.
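The blur limitation (patent claim 7 / application claim 6) reduces to two steps: count the objects to be blurred, then derive a blur level from that count. A minimal sketch; the linear mapping and cap are assumptions for illustration, since the claims do not specify how the level is computed:

```python
# Sketch of the recited blur step: number of objects -> level of blur.
# The base/per_object/cap parameters are hypothetical, not from the claims.

def blur_level(num_objects, base=2.0, per_object=0.5, cap=8.0):
    """More objects to blur yields a stronger blur level, up to a cap."""
    return min(base + per_object * num_objects, cap)

print(blur_level(0))    # → 2.0
print(blur_level(4))    # → 4.0
print(blur_level(20))   # → 8.0 (capped)
```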
Patent claim 9 recites the method of claim 2, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a third user input indicating the shared geographic location; and in response to receiving the third user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets.

Application claim 9 recites the one or more non-transitory, computer-readable media of claim 1, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a third user input indicating the shared geographic location; and in response to receiving the third user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets.

Patent claim 2 recites, in part: …; determining that the first media asset and the second media asset correspond to the shared time window based on analyzing the first time information and the second time information.

Application claim 10 recites the one or more non-transitory, computer-readable media of claim 9, further comprising: determining that the first media asset and the second media asset correspond to a shared time window based on analyzing first time information in a first data structure and second time information in a second data structure.

Patent claim 10 recites the method of claim 9, wherein determining that the first media asset and the second media asset correspond to the shared time window further comprises: receiving a fourth user input indicating the shared time window; and in response to receiving the fourth user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets.

Application claim 11 recites the one or more non-transitory, computer-readable media of claim 10, wherein determining that the first media asset and the second media asset correspond to the shared time window further comprises: receiving a fourth user input indicating the shared time window; and in response to receiving the fourth user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets.

Patent claim 11 recites the method of claim 2, wherein determining that the first media asset and the second media asset correspond to the shared object orientation further comprises: identifying a known object corresponding to the center-of-mass point; retrieving a plurality of known object details for the known object at the shared object orientation; and determining a known object detail of the plurality of known object details is in both the first media asset and the second media asset.
Application claim 12 recites the one or more non-transitory, computer-readable media of claim 1, further comprising determining that the first media asset and the second media asset correspond to the shared object orientation by: identifying a known object corresponding to the center-of-mass point; retrieving a plurality of known object details for the known object at the shared object orientation; and determining a known object detail of the plurality of known object details is in both the first media asset and the second media asset.

Patent claim 12 recites the method of claim 2, wherein generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset further comprises: identifying a shared object in both the first media asset and the second media asset; and generating a representation of the shared object in the composite media asset using a first object detail from the first media asset and a second object detail from the second media asset, wherein the second media asset does not comprise the first object detail and the first media asset does not comprise the second object detail.

Application claim 13 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset further comprises: identifying a shared object in both the first media asset and the second media asset; and generating a representation of the shared object in the composite media asset using a first object detail from the first media asset and a second object detail from the second media asset, wherein the second media asset does not comprise the first object detail and the first media asset does not comprise the second object detail.

Patent claim 13 recites the method of claim 2, wherein: the first location information indicates a first geographic location corresponding to the first media asset; the first time information indicates a first time corresponding to the first media asset; and the first object information indicates a first object included with the first media asset.

Application claim 14 recites the one or more non-transitory, computer-readable media of claim 1, wherein the first location information indicates a first geographic location corresponding to the first media asset, and wherein the first media asset data structure further comprises: first time information indicates a first time corresponding to the first media asset; and first object information indicates a first object included with the first media asset.

Patent claim 14 recites the method of claim 2, wherein the first media asset comprises a plurality of frames, and wherein retrieving the first media asset data structure for the first media asset further comprises: determining a first frame of the plurality of frames for generating the composite media asset; determining a subset of the first media asset data structure that corresponds to the first frame; and retrieving the first location information, the first time information, and the first object information from the subset of the first media asset data structure.

Application claim 15 recites the one or more non-transitory, computer-readable media of claim 14, wherein the first media asset comprises a plurality of frames, and wherein retrieving the first media asset data structure for the first media asset further comprises: determining a first frame of the plurality of frames for generating the composite media asset; determining a subset of the first media asset data structure that corresponds to the first frame; and retrieving the first location information, the first time information, and the first object information from the subset of the first media asset data structure.

Patent claim 15 recites the method of claim 2, wherein generating the composite media asset based on the first media asset and the second media asset further comprises: identifying a first portion of the first media asset corresponding to an out-of-focus object; selecting a second portion of the second media asset corresponding to the out-of-focus object in the first media asset; and replacing the first portion of the first media asset with the second portion.

Application claim 16 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset based on the first media asset and the second media asset further comprises: identifying a first portion of the first media asset corresponding to an out-of-focus object; selecting a second portion of the second media asset corresponding to the out-of-focus object in the first media asset; and replacing the first portion of the first media asset with the second portion.

Patent claim 16 recites the method of claim 2, wherein receiving the first user input requesting the composite media asset based on the center-of-mass point for the shared geographic location at the shared time window comprises: receiving a user selection of an object in the first media asset; determining a geographic location in which the object is found; assigning the geographic location as the shared geographic location; and assigning a position of the object at the geographic location as the center-of-mass point.
Application claim 17 recites the one or more non-transitory, computer-readable media of claim 1, wherein receiving the first user input requesting the composite media asset based on the center-of-mass point for the shared geographic location comprises: receiving a user selection of an object in the first media asset; determining a geographic location in which the object is found; assigning the geographic location as the shared geographic location; and assigning a position of the object at the geographic location as the center-of-mass point.

Patent claim 17 recites a non-transitory, computer-readable medium comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location at a shared time window; determining a shared object orientation based on the center-of-mass point for the shared geographic location, wherein the shared object orientation comprises a direction that a content capture device faced when capturing a respective media asset and locations, determined by filtering media assets to determine a smallest contained shape about the center-of-mass point, on a near-continuous perimeter about the center-of-mass point; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first location information, first time information, and first object information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second location information, second time information, and second object information; determining that the first media asset and the second media asset correspond based on comparing the first media asset data structure to the second media asset data structure; in response to determining that the first media asset and the second media asset correspond, determining that the first media asset and the second media asset correspond to the shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more of the locations on the near-continuous perimeter about the center-of-mass point; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset about the center-of-mass point, wherein the composite media asset comprises a portion of a 360 degree media asset from the locations on the near-continuous perimeter about the center-of-mass point.

Application claim 18 recites one or more non-transitory, computer-readable media comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first time information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second time information; determining that the first media asset and the second media asset correspond based on comparing the first media asset data structure to the second media asset data structure; in response to determining that the first media asset and the second media asset correspond, determining that the first media asset and the second media asset correspond to a shared object orientation based on determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset about the center-of-mass point.

Patent claim 18 recites the non-transitory, computer-readable medium of claim 17, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a second user input indicating the shared geographic location; and in response to receiving the second user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets.

Application claim 19 recites the one or more non-transitory, computer-readable media of claim 18, wherein determining that the first media asset and the second media asset correspond further comprises: receiving a second user input indicating a shared geographic location; and in response to receiving the second user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets.
Claim19 recites the non-transitory, computer-readable medium of claim 18, wherein determining that the first media asset and the second media asset correspond to the shared time window further comprises: receiving a third user input indicating the shared time window; and in response to receiving the third user input indicating the shared time window, filtering first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. Claim20 recites the one or more non-transitory, computer-readable media of claim 19, wherein the operations further comprise determining that the first media asset and the second media asset correspond to a shared time window further by: receiving a third user input indicating the shared time window; and in response to receiving the third user input indicating the shared time window, filtering first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2, 5-20 of U.S. Patent No. 12,287,822. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the patent anticipate the claims of the present application. Patent No.12,287,822 Application No. 
19/050,095 Claim2 recites a method for generating interactive composite media assets comprising common objects by coordinating uncoordinated content using best-fit models applied to perimeters about the common objects, the method comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first location information and first object information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second location information and second object information; determining that the first media asset and the second media asset correspond to the shared geographic location based on analyzing the first location information and the second location information; in response to determining that the first media asset and the second media asset correspond to the shared geographic location, determining that the first media asset and the second media asset correspond to a shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point; in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset, in a user interface, based on the first media asset and the second media asset, by merging the first media asset and the second media asset about the center-of-mass point; receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset. 
Claim1 recites one or more non-transitory, computer-readable media comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first location information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second location information; determining that the first media asset and the second media asset correspond to the shared geographic location based on analyzing the first location information; in response to determining that the first media asset and the second media asset correspond to the shared geographic location, determining that the first media asset and the second media asset correspond to a shared object orientation based on an identified object; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset, using an artificial intelligence model, by merging the first media asset and the second media asset about the center-of-mass point. 
Claim 2 recites…; in response to determining that the first media asset and the second media asset correspond to the shared geographic location, determining that the first media asset and the second media asset correspond to a shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point… Claim 2 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset further comprises determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point, wherein the one or more locations on the near-continuous perimeter about the center-of-mass point are determined by filtering media assets to determine a smallest contained shape about the center-of-mass point. Claim 2 recites a method for generating interactive composite media assets comprising common objects by coordinating uncoordinated content using best-fit models applied to perimeters about the common objects… Claim 3 recites the one or more non-transitory, computer-readable media of claim 2, wherein the one or more locations on the near-continuous perimeter about the center-of-mass point are determined by applying a best-fit mechanism to the center-of-mass point. Claim 5 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting an object detail in the first media asset; and rotating the first media asset based on the object detail. 
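Again as illustration only: the "best-fit mechanism" and "near-continuous perimeter" limitations suggest fitting a perimeter to capture locations around the center-of-mass point. A minimal sketch, assuming 2-D capture coordinates and a hypothetical angular-gap test for "near-continuous" (the claims specify neither the fit nor the continuity threshold):

```python
import math

def best_fit_radius(points, center):
    """Least-squares radius of a circle about a fixed center
    (the mean radial distance minimizes the squared residuals)."""
    cx, cy = center
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

def is_near_continuous(points, center, max_gap_deg=45.0):
    """Hypothetical test: the perimeter is 'near-continuous' when no angular
    gap between neighboring capture locations exceeds max_gap_deg."""
    cx, cy = center
    angles = sorted(math.degrees(math.atan2(y - cy, x - cx)) % 360 for x, y in points)
    gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % 360 for i in range(len(angles))]
    return max(gaps) <= max_gap_deg

ring = [(1, 0), (0.7, 0.7), (0, 1), (-0.7, 0.7), (-1, 0), (-0.7, -0.7), (0, -1), (0.7, -0.7)]
print(round(best_fit_radius(ring, (0, 0)), 3))  # 0.995
print(is_near_continuous(ring, (0, 0)))         # True
```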
Claim 4 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting an object detail in the first media asset; and rotating the first media asset based on the object detail. Claim 6 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting a first object in the first media asset, wherein the first object has a first scale; detecting a second object in the second media asset, wherein the second object has a second scale; and using scale spaced merging to merge the first object and the second object. Claim 5 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: detecting a first object in the first media asset, wherein the first object has a first scale; detecting a second object in the second media asset, wherein the second object has a second scale; and using scale spaced merging to merge the first object and the second object. Claim 7 recites the method of claim 2, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: determining a number of objects in the first media asset and the second media asset to be blurred; and determining a level of blur based on the number of objects. Claim 6 recites the one or more non-transitory, computer-readable media of claim 1, wherein merging the first media asset and the second media asset about the center-of-mass point further comprises: determining a number of objects in the first media asset and the second media asset to be blurred; and determining a level of blur based on the number of objects. 
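The blur claims tie a blur level to the number of objects to be blurred but recite only that dependency, not a formula. One plausible, purely hypothetical mapping from object count to blur strength (all parameter names and values below are invented for illustration):

```python
def blur_level(num_objects, base_sigma=1.0, per_object=0.5, max_sigma=8.0):
    """Map the number of objects to be blurred to a blur strength.
    All parameters are hypothetical; the claim recites only the dependency."""
    return min(base_sigma + per_object * num_objects, max_sigma)

print(blur_level(0), blur_level(4), blur_level(100))  # 1.0 3.0 8.0
```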
Claim 8 recites the method of claim 2, further comprising: generating for display, in the user interface, a mapping of available media assets for the composite media asset; and generating for display, in the user interface, an indicator of the available media assets based on the first location information and the second location information. Claim 7 recites the one or more non-transitory, computer-readable media of claim 1, further comprising: generating for display, in the user interface, a mapping of available media assets for the composite media asset; and generating for display, in the user interface, an indicator of the available media assets based on the first location information and the second location information. Claim 2 recites … receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset. Claim 8 recites the one or more non-transitory, computer-readable media of claim 1, further comprising: receiving, via the user interface, a second user input requesting a movement within the shared geographic location; and in response to the second user input, determining a new center-of-mass point for the composite media asset. Claim 9 recites the method of claim 2, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a third user input indicating the shared geographic location; and in response to receiving the third user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets. 
Claim 9 recites the one or more non-transitory, computer-readable media of claim 1, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a third user input indicating the shared geographic location; and in response to receiving the third user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets. Claim 10 recites the method of claim 9, further comprising: determining that the first media asset and the second media asset correspond to a shared time window based on analyzing first time information in the first media asset data structure and second time information in the second media asset data structure. Claim 10 recites the one or more non-transitory, computer-readable media of claim 9, further comprising: determining that the first media asset and the second media asset correspond to a shared time window based on analyzing first time information in a first data structure and second time information in a second data structure. Claim 11 recites the method of claim 10, wherein determining that the first media asset and the second media asset correspond to the shared time window further comprises: receiving a fourth user input indicating the shared time window; and in response to receiving the fourth user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. 
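The location and time-window claims describe two-stage filtering: first by shared geographic location, then by shared time window over the first subset. A rough sketch, not from the application, assuming latitude/longitude pairs and a hypothetical search radius:

```python
import math
from datetime import datetime

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def filter_by_location(assets, shared_location, radius_km=2.0):
    """First subset: assets whose location information falls within the shared geographic location."""
    return [a for a in assets if haversine_km(a["location"], shared_location) <= radius_km]

def filter_by_time(assets, window_start, window_end):
    """Second subset: assets whose time information falls within the shared time window."""
    return [a for a in assets if window_start <= a["time"] <= window_end]

assets = [
    {"id": 1, "location": (40.0000, -74.0000), "time": datetime(2025, 6, 1, 12, 0)},
    {"id": 2, "location": (40.0050, -74.0000), "time": datetime(2025, 6, 1, 12, 5)},
    {"id": 3, "location": (41.0000, -74.0000), "time": datetime(2025, 6, 1, 12, 1)},
]
first = filter_by_location(assets, (40.0, -74.0))
second = filter_by_time(first, datetime(2025, 6, 1, 12, 0), datetime(2025, 6, 1, 12, 3))
print([a["id"] for a in first], [a["id"] for a in second])  # [1, 2] [1]
```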
Claim 11 recites the one or more non-transitory, computer-readable media of claim 10, wherein determining that the first media asset and the second media asset correspond to the shared time window further comprises: receiving a fourth user input indicating the shared time window; and in response to receiving the fourth user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. Claim 12 recites the method of claim 2, further comprising determining that the first media asset and the second media asset correspond to the shared object orientation by: identifying a known object corresponding to the center-of-mass point; retrieving a plurality of known object details for the known object at the shared object orientation; and determining a known object detail of the plurality of known object details is in both the first media asset and the second media asset. Claim 12 recites the one or more non-transitory, computer-readable media of claim 1, further comprising determining that the first media asset and the second media asset correspond to the shared object orientation by: identifying a known object corresponding to the center-of-mass point; retrieving a plurality of known object details for the known object at the shared object orientation; and determining a known object detail of the plurality of known object details is in both the first media asset and the second media asset. 
Claim 13 recites the method of claim 2, wherein generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset further comprises: identifying a shared object in both the first media asset and the second media asset; and generating a representation of the shared object in the composite media asset using a first object detail from the first media asset and a second object detail from the second media asset, wherein the second media asset does not comprise the first object detail and the first media asset does not comprise the second object detail. Claim 13 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset further comprises: identifying a shared object in both the first media asset and the second media asset; and generating a representation of the shared object in the composite media asset using a first object detail from the first media asset and a second object detail from the second media asset, wherein the second media asset does not comprise the first object detail and the first media asset does not comprise the second object detail. Claim 14 recites the method of claim 2, wherein the first location information indicates a first geographic location corresponding to the first media asset; first time information indicates a first time corresponding to the first media asset; and the first object information indicates a first object included with the first media asset. 
Claim 14 recites the one or more non-transitory, computer-readable media of claim 1, wherein the first location information indicates a first geographic location corresponding to the first media asset, and wherein the first media asset data structure further comprises: first time information indicating a first time corresponding to the first media asset; and first object information indicating a first object included with the first media asset. Claim 15 recites the method of claim 14, wherein the first media asset comprises a plurality of frames, and wherein retrieving the first media asset data structure for the first media asset further comprises: determining a first frame of the plurality of frames for generating the composite media asset; determining a subset of the first media asset data structure that corresponds to the first frame; and retrieving the first location information, the first time information, and the first object information from the subset of the first media asset data structure. Claim 15 recites the one or more non-transitory, computer-readable media of claim 14, wherein the first media asset comprises a plurality of frames, and wherein retrieving the first media asset data structure for the first media asset further comprises: determining a first frame of the plurality of frames for generating the composite media asset; determining a subset of the first media asset data structure that corresponds to the first frame; and retrieving the first location information, the first time information, and the first object information from the subset of the first media asset data structure. 
Claim 16 recites the method of claim 2, wherein generating the composite media asset based on the first media asset and the second media asset further comprises: identifying a first portion of the first media asset corresponding to an out-of-focus object; selecting a second portion of the second media asset corresponding to the out-of-focus object in the first media asset; and replacing the first portion of the first media asset with the second portion. Claim 16 recites the one or more non-transitory, computer-readable media of claim 1, wherein generating the composite media asset based on the first media asset and the second media asset further comprises: identifying a first portion of the first media asset corresponding to an out-of-focus object; selecting a second portion of the second media asset corresponding to the out-of-focus object in the first media asset; and replacing the first portion of the first media asset with the second portion. Claim 17 recites the method of claim 2, wherein receiving the first user input requesting the composite media asset based on the center-of-mass point for the shared geographic location comprises: receiving a user selection of an object in the first media asset; determining a geographic location in which the object is found; assigning the geographic location as the shared geographic location; and assigning a position of the object at the geographic location as the center-of-mass point. Claim 17 recites the one or more non-transitory, computer-readable media of claim 1, wherein receiving the first user input requesting the composite media asset based on the center-of-mass point for the shared geographic location comprises: receiving a user selection of an object in the first media asset; determining a geographic location in which the object is found; assigning the geographic location as the shared geographic location; and assigning a position of the object at the geographic location as the center-of-mass point. 
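The out-of-focus replacement limitation implies some focus measure for choosing between overlapping portions, though the claims name none. A gradient-energy sharpness score is one common stand-in; everything below is a hypothetical sketch over 2-D grayscale patches:

```python
def sharpness(patch):
    """Gradient-energy focus measure over a 2-D grayscale patch:
    higher = sharper (a hypothetical stand-in; the claims name no measure)."""
    return sum((patch[i][j + 1] - patch[i][j]) ** 2 + (patch[i + 1][j] - patch[i][j]) ** 2
               for i in range(len(patch) - 1) for j in range(len(patch[0]) - 1))

def replace_if_sharper(first_portion, second_portion):
    """Replace the first portion with the second when the second is sharper."""
    return second_portion if sharpness(second_portion) > sharpness(first_portion) else first_portion

blurry = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
sharp = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]
print(sharpness(blurry), sharpness(sharp))          # 0 648
print(replace_if_sharper(blurry, sharp) is sharp)   # True
```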
Claim 18 recites a non-transitory, computer-readable medium comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point for a shared geographic location; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first object information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second object information; determining that the first media asset and the second media asset correspond based on comparing the first media asset data structure to the second media asset data structure; in response to determining that the first media asset and the second media asset correspond, determining that the first media asset and the second media asset correspond to a shared object orientation based on (i) analyzing the first object information and the second object information and (ii) determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset about the center-of-mass point. 
Claim 18 recites one or more non-transitory, computer-readable media comprising instructions that when executed by one or more processors cause operations comprising: receiving a first user input requesting a composite media asset based on a center-of-mass point; retrieving a first media asset data structure for a first media asset, wherein the first media asset data structure comprises first time information; retrieving a second media asset data structure for a second media asset, wherein the second media asset data structure comprises second time information; determining that the first media asset and the second media asset correspond based on comparing the first media asset data structure to the second media asset data structure; in response to determining that the first media asset and the second media asset correspond, determining that the first media asset and the second media asset correspond to a shared object orientation based on determining that the first media asset and the second media asset correspond to one or more locations on a near-continuous perimeter about the center-of-mass point; and in response to determining that the first media asset and the second media asset correspond to the shared object orientation, generating the composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset about the center-of-mass point. 
Claim 19 recites the non-transitory, computer-readable medium of claim 18, wherein determining that the first media asset and the second media asset correspond to the shared geographic location further comprises: receiving a second user input indicating the shared geographic location; and in response to receiving the second user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets. Claim 19 recites the one or more non-transitory, computer-readable media of claim 18, wherein determining that the first media asset and the second media asset correspond further comprises: receiving a second user input indicating a shared geographic location; and in response to receiving the second user input indicating the shared geographic location, filtering a plurality of available media assets based on a comparison of respective location information for the plurality of available media assets and the shared geographic location to generate a first subset of media assets. Claim 20 recites the non-transitory, computer-readable medium of claim 19, wherein the operations further comprise determining that the first media asset and the second media asset correspond to a shared time window further by: receiving a third user input indicating the shared time window; and in response to receiving the third user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. 
Claim 20 recites the one or more non-transitory, computer-readable media of claim 19, wherein the operations further comprise determining that the first media asset and the second media asset correspond to a shared time window further by: receiving a third user input indicating the shared time window; and in response to receiving the third user input indicating the shared time window, filtering the first subset of media assets based on a comparison of respective time information for the first subset of media assets and the shared time window to generate a second subset of media assets. Allowable Subject Matter Claim 21 is allowed. The following is a statement of reasons for the indication of allowable subject matter: claim 21 recites one or more non-transitory, computer-readable media comprising instructions that when executed by one or more processors cause operations comprising: retrieving a first media asset data structure for a first media asset and a second media asset data structure for a second media asset; determining that the first media asset and the second media asset correspond based on comparing the first media asset data structure to the second media asset data structure; in response to determining that the first media asset and the second media asset correspond, determining that the first media asset and the second media asset correspond to a first location subset on a first near-continuous perimeter about a center-of-mass point; in response to determining that the first media asset and the second media asset correspond to the first location subset on the first near-continuous perimeter about the center-of-mass point, generating a first composite media asset based on the first media asset and the second media asset by merging the first media asset and the second media asset; receiving a user input requesting a movement based on the first composite media asset; in response to the user input, retrieving a third media asset data structure for a third media 
asset and a fourth media asset data structure for a fourth media asset; determining that the third media asset and the fourth media asset correspond based on comparing the third media asset data structure to the fourth media asset data structure; in response to determining that the third media asset and the fourth media asset correspond, determining a second location subset on a second near-continuous perimeter about the center-of-mass point; and generating a second composite media asset based on the third media asset and the fourth media asset by merging the third media asset and the fourth media asset about the center-of-mass point. The closest prior art, Tamir et al. (US 2017/0178687), either singularly or in combination, fails to anticipate or render the underlined limitations obvious. Note: Claims 1-20 would be allowable if applicant overcomes the applied non-statutory double patenting rejections. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIRUMSEW WENDMAGEGN whose telephone number is (571)270-1118. The examiner can normally be reached 9:00 AM-7:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GIRUMSEW WENDMAGEGN/ Primary Examiner, Art Unit 2484

Prosecution Timeline

Feb 10, 2025
Application Filed
Mar 21, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604086
TELESCOPE WITH AT LEAST ONE VIEWING CHANNEL
2y 5m to grant Granted Apr 14, 2026
Patent 12602939
INSPECTION SYSTEM AND INSPECTION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12604068
SELECTIVE PLAYBACK OF AUDIO AT NORMAL SPEED DURING TRICK PLAY OPERATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597445
COLLABORATIVE ENHANCEMENT OF VOLUMETRIC VIDEO WITH A DEVICE HAVING MULTIPLE CAMERAS
2y 5m to grant Granted Apr 07, 2026
Patent 12598319
METHODS AND SYSTEMS FOR STORING AERIAL IMAGES ON A DATA STORAGE DEVICE
2y 5m to grant Granted Apr 07, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
98%
With Interview (+21.4%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 968 resolved cases by this examiner. Grant probability derived from career allow rate.
