Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,030

ELECTRONIC DEVICE OBTAINING METADATA WHILE OBTAINING VIDEO AND METHOD THEREOF

Final Rejection (§101, §103)
Filed: Jun 16, 2023
Examiner: ALLEN, LUCIUS CAMERON GREE
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% — above average (27 granted / 38 resolved; +9.1% vs TC avg)
Interview Lift: strong, +39.3% (allowance among resolved cases with interview vs. without)
Typical timeline: 3y 0m avg prosecution; 20 currently pending
Career history: 58 total applications across all art units

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 38 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of AIA Status

The present application is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/16/2023, 05/02/2024, and 11/27/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 1 and 6 are objected to because of the following informalities: In claim 1, lines 13-14, the term “the identified angle being changing by exceeding a” should be amended to correct typographical/grammar issues and to avoid clarity issues that could lead to a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. In claim 6, lines 3-4, the term “border line of the video exceeds a designated threshold” should be changed to “border line of the video exceeding a designated threshold” to correct typographical/grammar issues and avoid clarity issues.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 10, and 19 and their dependent claims 2-9, 11-18, and 20 are rejected under 35 U.S.C. 101.

Regarding claim 1 and its dependent claims 2-9:

Step 1 Analysis: Claim 1 is directed to a device, which falls within one of the four statutory categories.
Step 2A Prong 1 Analysis: Claim 1 recites, in part, “identify a magnitude of a rotational motion of the electronic device, in the state of obtaining the video; and obtain information for segmenting a portion of the video, which corresponds to a time interval in which at least one of the identified angle being changing by exceeding a designated range or the identified magnitude of the rotational motion being exceeding a designated magnitude.” As drafted, these are limitations that, under the broadest reasonable interpretation, cover merely obtaining information; they can be understood to be a simple gathering of data from sensors. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. The claim recites the following additional elements: an electronic device comprising a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor. These additional elements are recited at a high level of generality, such that they amount to no more than a mere device for gathering data. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea.
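As an aside for non-practitioner readers, the limitation the examiner characterizes as “a simple gathering of data from sensors” reduces to threshold logic over timestamped sensor samples. The following is a minimal illustrative sketch only; the function and threshold names are hypothetical and do not appear in the application.

```python
# Illustrative sketch (hypothetical names, not code from the application):
# record time intervals during video capture in which the hinge angle leaves
# a designated range, or the magnitude of rotational motion exceeds a
# designated magnitude.

from typing import List, Optional, Tuple

ANGLE_RANGE = (70.0, 110.0)   # designated range for the hinge angle (degrees)
ROTATION_LIMIT = 30.0         # designated magnitude of rotational motion (deg/s)

def segment_intervals(
    samples: List[Tuple[float, float, float]],  # (timestamp_s, hinge_angle_deg, rotation_deg_per_s)
) -> List[Tuple[float, float]]:
    """Return (start, end) intervals where either trigger condition holds."""
    intervals: List[Tuple[float, float]] = []
    start: Optional[float] = None
    for t, angle, rotation in samples:
        triggered = (
            not (ANGLE_RANGE[0] <= angle <= ANGLE_RANGE[1])
            or abs(rotation) > ROTATION_LIMIT
        )
        if triggered and start is None:
            start = t                      # interval opens at first triggering sample
        elif not triggered and start is not None:
            intervals.append((start, t))   # interval closes when conditions clear
            start = None
    if start is not None:                  # still triggered at end of capture
        intervals.append((start, samples[-1][0]))
    return intervals
```

For example, samples with a rotation spike at t=1 s and an out-of-range angle at t=3 s would yield two candidate intervals for segmenting the video.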
Please see MPEP § 2106.04(a)(2), subsection III.C.

Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. Please see MPEP § 2106.05. The claim is directed to an abstract idea. For all the foregoing reasons, claim 1 does not comply with the requirements of 35 U.S.C. 101. Accordingly, dependent claims 2-9 do not provide elements that overcome the deficiencies of independent claim 1. Moreover, claims 2-9 each recite wherein clauses that further specify the abstract idea elements of claim 1 with additional abstract ideas, such as “identify at least one subject in the video; and obtain the information, based on the identified at least one subject in a designated portion being spaced apart from a border line of the video by a distance that is less than a designated distance,” and hence still recite abstract ideas. Accordingly, dependent claims 2-9 are not patent eligible under § 101.

Regarding claim 10 and its dependent claims 11-18:

Step 1 Analysis: Claim 10 is directed to a method, which falls within one of the four statutory categories.
Step 2A Prong 1 Analysis: Claim 10 recites, in part, “identifying an angle between a first housing and a second housing, in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device; identifying a magnitude of a rotational motion of the electronic device, in the state of obtaining the video; and obtaining information for segmenting a portion corresponding to a time interval of the video, in which at least one of the angle being changing by exceeding a designated range or the identified magnitude of the rotational motion being exceeding a designated magnitude.” As drafted, these are limitations that, under the broadest reasonable interpretation, cover merely obtaining information; they can be understood to be a simple gathering of data from sensors. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. The claim recites the following additional element: a method performed by an electronic device. This additional element is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP § 2106.04(a)(2), subsection III.C.

Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. Please see MPEP § 2106.05. The claim is directed to an abstract idea. For all the foregoing reasons, claim 10 does not comply with the requirements of 35 U.S.C. 101. Accordingly, dependent claims 11-18 do not provide elements that overcome the deficiencies of independent claim 10. Moreover, claims 11-18 each recite wherein clauses that further specify the abstract idea elements of claim 10 with additional abstract ideas, such as “identifying at least one subject in the video; and obtaining the information indicating the time interval, based on the identified at least one subject in a designated portion that is spaced apart from a border line of the video by a distance less than a designated distance,” and hence still recite abstract ideas. Accordingly, dependent claims 11-18 are not patent eligible under § 101.

Regarding claim 19 and its dependent claim 20:

Step 1 Analysis: Claim 19 is directed to a device, which falls within one of the four statutory categories.
Step 2A Prong 1 Analysis: Claim 19 recites, in part, “identify, by using the at least one sensor, the motion of the electronic device, in a state of obtaining a video by controlling the at least one camera, based on a shooting input; identify a position of at least one subject of the video, based on the identified motion of the electronic device, which corresponds to one of designated motions for segmenting the video; and obtain first metadata comprising a time interval in which one of the designated motions is identified or the position of the at least one subject.” As drafted, these are limitations that, under the broadest reasonable interpretation, cover merely obtaining information. The limitations of “identify that a direction of the at least one subject of the video is outside a designated range; and obtain second metadata comprising the direction of the at least one subject, based on identifying that the direction of the at least one subject exceeds the designated range” can likewise be understood to be a simple gathering of data from sensors. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. The claim recites the following additional element: an electronic device. This additional element is recited at a high level of generality, such that it amounts to no more than a mere device for gathering data. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP § 2106.04(a)(2), subsection III.C.
Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. Please see MPEP § 2106.05. The claim is directed to an abstract idea. For all the foregoing reasons, claim 19 does not comply with the requirements of 35 U.S.C. 101. Accordingly, dependent claim 20 does not provide elements that overcome the deficiencies of independent claim 19. Moreover, claim 20 recites wherein clauses that further specify the abstract idea elements of claim 19 with additional abstract ideas, such as “identify that a direction of the at least one subject of the video is outside a designated range; and obtain second metadata comprising the direction of the at least one subject, based on identifying that the direction of the at least one subject exceeds the designated range,” and hence still recites abstract ideas. Accordingly, dependent claim 20 is not patent eligible under § 101.

The applicant is respectfully advised to amend the claims to include subject matter from Specification paragraph [0104], such as “The electronic device may display a visual object for canceling the input that accepts the segment, on the screen. The electronic device may restore the segmented time interval 601, based on an input to the visual object for canceling the input that accepts the segment,” to overcome the § 101 rejection by including a use for the data that has been acquired by the electronic device.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20220214852 A1), hereafter referenced as Kim, in view of Pacurariu et al. (US 20150187390 A1), hereafter referenced as Pacurariu, and Ishikawa et al. (US 20210096361 A1), hereafter referenced as Ishikawa.

Regarding claim 1, Kim explicitly teaches an electronic device comprising (Fig. 1A-B, Paragraph [0025]- Kim discloses an electronic device 101 according to certain embodiments may include a foldable housing 110, a folding part 116, a first display 120, a second display 122, a first sensor 130, a second sensor 132, a third sensor 134, and/or at least component 136. The first display 120 and/or the second display 122 may be flexible and/or foldable displays arranged in a space formed by the foldable housing 110.): a first housing (Fig. 1A, #112 called a first housing. Paragraph [0026]- Kim discloses the foldable housing 110 may include a first housing 112 and a second housing 114.); a second housing (Fig. 1A, #114 called a second housing. Paragraph [0026]- Kim discloses the foldable housing 110 may include a first housing 112 and a second housing 114.); a first sensor (Fig. 1B, Paragraph [0029]- Kim discloses the first sensor 130 may be disposed in a space formed by the first housing 112.); a second sensor (Fig.
1A, Paragraph [0030]- Kim discloses the second sensor 132 may be disposed in a space formed by the second housing.); at least one camera provided on the first housing (Fig. 1A, Paragraph [0029]- Kim discloses the first sensor 130 may be mounted while being included in another subsidiary material included in at least a part of the first housing 112. The first sensor 130 may include at least one among at least one camera sensor or at least one ultra-wide band (UWB) sensor); and a processor configured to: identify an angle between the first housing and the second housing, in a state of obtaining a video by controlling the at least one camera (Fig. 1A, Paragraph [0043]- Kim discloses the folding part 116 may include an angle sensing sensor 160. For example, the angle sensing sensor 160 may be a degree sensor capable of sensing an angle formed by the first housing 112 and the second housing 114.); Kim fails to explicitly teach identify a magnitude of a rotational motion of the electronic device, in the state of obtaining the video; or the identified magnitude of the rotational motion being exceeding a designated magnitude. However, Pacurariu explicitly teaches identify a magnitude of a rotational motion of the electronic device, in the state of obtaining the video (Fig. 2, Paragraph [0049]- Pacurariu discloses certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc. Motion tagging may occur in real time or during post processing.); or the identified magnitude of the rotational motion being exceeding a designated magnitude (Fig. 
2, Paragraph [0049]- Pacurariu discloses certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing, in a state of obtaining a video by controlling the at least one camera, with the teachings of Pacurariu of identifying a magnitude of a rotational motion of the electronic device, in the state of obtaining the video, or the identified magnitude of the rotational motion being exceeding a designated magnitude. The combination yields Kim’s system of imaging on a device with multiple housings wherein a magnitude of a rotational motion of the electronic device is identified in the state of obtaining the video, or the identified magnitude of the rotational motion exceeds a designated magnitude. The motivation behind the modification would have been to allow for more access to system information, since Kim and Pacurariu are both systems that use sensors to gather data, wherein Kim’s system provides a way to accurately acquire environmental data, while Pacurariu’s system provides a way to accurately motion tag. Please see Kim et al. (US 20220214852 A1) Paragraph [0116] and Pacurariu et al. (US 20150187390 A1) Paragraphs [0049]-[0050].
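Pacurariu’s motion tagging, as quoted above, amounts to flagging frames whose change in sampled motion data exceeds a specified threshold. The following is a minimal sketch under that reading; the function name and values are hypothetical and are not code from the reference.

```python
# Illustrative sketch of threshold-based motion tagging (hypothetical names;
# not code from Pacurariu). A frame is tagged when the change in its sampled
# motion value relative to the previous frame exceeds a specified threshold.

from typing import List

def tag_motion_events(motion_track: List[float], threshold: float) -> List[int]:
    """Return indices of frames whose motion delta exceeds the threshold."""
    tagged: List[int] = []
    for i in range(1, len(motion_track)):
        delta = abs(motion_track[i] - motion_track[i - 1])
        if delta > threshold:
            # Tag the frame to indicate a camera event (rotation, bump, jerk, ...).
            tagged.append(i)
    return tagged
```

A track with two sharp jumps in motion data would produce two tagged frame indices, which could then be mapped back to timestamps for real-time or post-processing use.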
Kim in view of Pacurariu fails to explicitly teach obtaining information for segmenting a portion of the video, which corresponds to a time interval in which at least one of the identified angle being changing by exceeding a designated range. However, Ishikawa explicitly teaches this limitation (Fig. 5, Paragraph [0197]- Ishikawa discloses in a case where the amount of the change from the first angle of view to the second angle of view exceeds a predetermined threshold value, the video generation section generates a second video different from a first video at a time when the user is in the acceleration state in the virtual space.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim in view of Pacurariu of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing, in a state of obtaining a video by controlling the at least one camera, with the teachings of Ishikawa of obtaining information for segmenting a portion of the video, which corresponds to a time interval in which at least one of the identified angle being changing by exceeding a designated range. The combination yields Kim’s system of imaging on a device with multiple housings wherein information is obtained for segmenting a portion of the video corresponding to a time interval in which the identified angle changes by exceeding a designated range. The motivation behind the modification would have been to allow for a cleaner transition between frames, since Kim and Ishikawa are both systems that display data to a user.
Kim’s system provides a way to accurately acquire environmental data, while Ishikawa’s system provides a way to reduce screen shake. Please see Kim et al. (US 20220214852 A1) Paragraph [0116] and Ishikawa et al. (US 20210096361 A1) Paragraph [0003].

Regarding claim 10, Kim explicitly teaches a method performed by an electronic device comprising (Fig. 1, Paragraph [0010]- Kim discloses a method for operating an electronic device may include sensing an angle change between a first housing and a second housing of the electronic device): identifying an angle between a first housing (Fig. 1A, #112 called a first housing. Paragraph [0026]- Kim discloses the foldable housing 110 may include a first housing 112 and a second housing 114.) and a second housing (Fig. 1A, #114 called a second housing. Paragraph [0026]) in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device (Fig. 1A, Paragraph [0043]- Kim discloses the folding part 116 may include an angle sensing sensor 160. For example, the angle sensing sensor 160 may be a degree sensor capable of sensing an angle formed by the first housing 112 and the second housing 114.); Kim fails to explicitly teach identifying a magnitude of a rotational motion of the electronic device, in the state of obtaining the video, or the identified magnitude of the rotational motion being exceeding a designated magnitude. However, Pacurariu explicitly teaches identifying a magnitude of a rotational motion of the electronic device, in the state of obtaining the video (Fig.
2, Paragraph [0049]- Pacurariu discloses certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc. Motion tagging may occur in real time or during post processing.); or the identified magnitude of the rotational motion being exceeding a designated magnitude (Fig. 2, Paragraph [0049]- Pacurariu discloses certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim of a method performed by an electronic device comprising: identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Pacurariu of identifying a magnitude of a rotational motion of the electronic device, in the state of obtaining the video, or the identified magnitude of the rotational motion being exceeding a designated magnitude. The combination yields Kim’s system of imaging on a device with multiple housings wherein a magnitude of a rotational motion of the electronic device is identified in the state of obtaining the video, or the identified magnitude of the rotational motion exceeds a designated magnitude.
The motivation behind the modification would have been to allow for more access to system information, since Kim and Pacurariu are both systems that use sensors to gather data, wherein Kim’s system provides a way to accurately acquire environmental data, while Pacurariu’s system provides a way to accurately motion tag. Please see Kim et al. (US 20220214852 A1) Paragraph [0116] and Pacurariu et al. (US 20150187390 A1) Paragraphs [0049]-[0050]. Kim in view of Pacurariu fails to explicitly teach obtaining information for segmenting a portion corresponding to a time interval of the video, in which at least one of the angle being changing by exceeding a designated range. However, Ishikawa explicitly teaches this limitation (Fig. 5, Paragraph [0197]- Ishikawa discloses in a case where the amount of the change from the first angle of view to the second angle of view exceeds a predetermined threshold value, the video generation section generates a second video different from a first video at a time when the user is in the acceleration state in the virtual space.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim in view of Pacurariu of a method performed by an electronic device comprising: identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Ishikawa of obtaining information for segmenting a portion corresponding to a time interval of the video, in which at least one of the angle being changing by exceeding a designated range.
The combination yields Kim’s system of imaging on a device with multiple housings wherein information is obtained for segmenting a portion corresponding to a time interval of the video in which the angle changes by exceeding a designated range. The motivation behind the modification would have been to allow for a cleaner transition between frames, since Kim and Ishikawa are both systems that display data to a user, wherein Kim’s system provides a way to accurately acquire environmental data, while Ishikawa’s system provides a way to reduce screen shake. Please see Kim et al. (US 20220214852 A1) Paragraph [0116] and Ishikawa et al. (US 20210096361 A1) Paragraph [0003].

Claims 2-5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20220214852 A1), hereafter referenced as Kim, in view of Pacurariu et al. (US 20150187390 A1), hereafter referenced as Pacurariu, Ishikawa et al. (US 20210096361 A1), hereafter referenced as Ishikawa, and Waggoner et al. (US 20170177197 A1), hereafter referenced as Waggoner.

Regarding claim 2, Kim in view of Pacurariu and Ishikawa explicitly teaches the electronic device of claim 1. Kim further teaches wherein the processor is further configured to: identify at least one subject in the video (Fig. 1, Paragraph [0029]- Kim discloses the UWB wireless technology may be used without limitation in frequency. The UWB wireless technology may be used for a radar function such as measuring the distance between the electronic device 101 and a subject and/or tracking the position of the subject.); Kim in view of Pacurariu and Ishikawa fails to explicitly teach obtaining the information, based on the identified at least one subject in a designated portion being spaced apart from a border line of the video by a distance that is less than a designated distance.
However, Waggoner explicitly teaches obtaining the information, based on the identified at least one subject in a designated portion being spaced apart from a border line of the video by a distance that is less than a designated distance (Fig. 3, Paragraph [0030]- Waggoner discloses where the representation of the object of interest is near the edge of the frame it may not be possible to center the object in the displayed view, but the process can attempt to center the object of interest to the extent possible.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim in view of Pacurariu and Ishikawa of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing, in a state of obtaining a video by controlling the at least one camera, with the teachings of Waggoner of obtaining the information, based on the identified at least one subject in a designated portion being spaced apart from a border line of the video by a distance that is less than a designated distance. The combination yields Kim’s system of imaging on a device with multiple housings wherein the information is obtained based on the identified at least one subject in a designated portion being spaced apart from a border line of the video by a distance that is less than a designated distance. The motivation behind the modification would have been to allow for more access to information about the subject, since Kim and Waggoner are both systems that use sensors to gather data, wherein Kim’s system provides a way to accurately acquire environmental data, while Waggoner’s system provides a way to get a better view of the subject. Please see Kim et al. (US 20220214852 A1) Paragraph [0116] and Waggoner et al.
(US 20170177197 A1) Paragraph [0053].

Regarding claim 3, Kim in view of Pacurariu, Ishikawa, and Waggoner explicitly teaches the electronic device of claim 2. Kim fails to explicitly teach wherein the processor is further configured to obtain the time interval, based on the identified rotational motion of the electronic device, by using the second sensor. However, Pacurariu explicitly teaches this limitation (Fig. 2, Paragraph [0049]- Pacurariu discloses some motion data may be derived, for example, from data sampled from the motion sensor 135 or the GPS sensor 130 and/or from data in the motion track 220 and/or the geolocation track 225. Certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing, in a state of obtaining a video by controlling the at least one camera, with the teachings of Pacurariu wherein the processor is further configured to obtain the time interval, based on the identified rotational motion of the electronic device, by using the second sensor.
Kim's system of imaging on a device with multiple housings would thereby obtain the time interval, based on the identified rotational motion of the electronic device, by using the second sensor. The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].

Regarding claim 4, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the electronic device of claim 3. Kim fails to explicitly teach wherein the processor is further configured to obtain the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range, based on the second sensor. However, Pacurariu explicitly teaches wherein the processor is further configured to obtain the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range, based on the second sensor (Fig. 2, Paragraph [0049]: Pacurariu discloses that some motion data may be derived, for example, from data sampled from the motion sensor 135 or the GPS sensor 130 and/or from data in the motion track 220 and/or the geolocation track 225. Certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames, or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing in a state of obtaining a video by controlling the at least one camera, with the teachings of Pacurariu wherein the processor is further configured to obtain the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range, based on the second sensor. Kim's system of imaging on a device with multiple housings would thereby obtain the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range, based on the second sensor. The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].

Regarding claim 5, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the electronic device of claim 4. Kim further teaches further comprising a screen associated with editing of the video (Fig.
1A, Paragraph [0073]: Kim discloses that the first application operating in conjunction with the at least one sensor or the display may include an application for providing a user interface by using data acquired from the at least one sensor, or an application for transmitting the data acquired from the at least one sensor to at least one other electronic device). Kim fails to explicitly teach wherein the processor is further configured to display, on the screen, a guide for indicating the information and the time interval. However, Pacurariu explicitly teaches wherein the processor is further configured to display, on the screen, a guide for indicating the information and the time interval (Fig. 1, Paragraph [0035]: Pacurariu discloses that the user interface 145 may be communicatively coupled (either wirelessly or wired) and may include any type of input/output device including buttons and/or a touchscreen. The user interface 145 may be communicatively coupled with the controller 120 and/or the memory 125 via a wired or wireless interface). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing in a state of obtaining a video by controlling the at least one camera, with the teachings of Pacurariu wherein the processor is further configured to display, on the screen, a guide for indicating the information and the time interval. Kim's system of imaging on a device with multiple housings would thereby display, on the screen, a guide for indicating the information and the time interval.
The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].

Regarding claim 11, Kim in view of Pacurariu and Ishikawa teaches the method of claim 10. Kim further teaches further comprising: identifying at least one subject in the video (Fig. 1, Paragraph [0029]: Kim discloses that the UWB wireless technology may be used without limitation in frequency. The UWB wireless technology may be used for a radar function such as measuring the distance between the electronic device 101 and a subject and/or tracking the position of the subject). Kim in view of Pacurariu and Ishikawa fails to explicitly teach obtaining the information indicating the time interval, based on the identified at least one subject in a designated portion that is spaced apart from a border line of the video by a distance less than a designated distance. However, Waggoner explicitly teaches obtaining the information indicating the time interval, based on the identified at least one subject in a designated portion that is spaced apart from a border line of the video by a distance less than a designated distance (Fig. 3, Paragraph [0030]: Waggoner discloses that where the representation of the object of interest is near the edge of the frame, it may not be possible to center the object in the displayed view, but the process can attempt to center the object of interest to the extent possible).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu and Ishikawa, of a method performed by an electronic device comprising identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Waggoner of obtaining the information indicating the time interval, based on the identified at least one subject in a designated portion that is spaced apart from a border line of the video by a distance less than a designated distance. Kim's system of imaging on a device with multiple housings would thereby obtain the information indicating the time interval, based on the identified at least one subject in a designated portion that is spaced apart from a border line of the video by a distance less than a designated distance. The motivation for the modification would have been to allow more access to information about the subject, since Kim and Waggoner both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Waggoner's system provides a way to obtain a better view of the subject. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Waggoner et al. (US 20170177197 A1), Paragraph [0053].

Regarding claim 12, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the method of claim 11. Kim fails to explicitly teach further comprising obtaining the time interval, based on the identified rotational motion of the electronic device, by using the second sensor. However, Pacurariu explicitly teaches further comprising obtaining the time interval, based on the identified rotational motion of the electronic device, by using the second sensor (Fig.
2, Paragraph [0049]: Pacurariu discloses that some motion data may be derived, for example, from data sampled from the motion sensor 135 or the GPS sensor 130 and/or from data in the motion track 220 and/or the geolocation track 225. Certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames, or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of a method performed by an electronic device comprising identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Pacurariu of further comprising obtaining the time interval, based on the identified rotational motion of the electronic device, by using the second sensor. Kim's system of imaging on a device with multiple housings would thereby obtain the time interval, based on the identified rotational motion of the electronic device, by using the second sensor. The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].
Regarding claim 13, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the method of claim 12. Kim fails to explicitly teach further comprising obtaining the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range. However, Pacurariu explicitly teaches further comprising obtaining the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range (Fig. 2, Paragraph [0049]: Pacurariu discloses that some motion data may be derived, for example, from data sampled from the motion sensor 135 or the GPS sensor 130 and/or from data in the motion track 220 and/or the geolocation track 225. Certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames, or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of a method performed by an electronic device comprising identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Pacurariu of further comprising obtaining the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range.
Kim's system of imaging on a device with multiple housings would thereby obtain the time interval comprising a timing at which an acceleration applied to the electronic device is identified in the designated range. The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].

Regarding claim 14, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the method of claim 13. Kim further teaches further comprising displaying, on a screen (Fig. 1A, Paragraph [0073]: Kim discloses that the first application operating in conjunction with the at least one sensor or the display may include an application for providing a user interface by using data acquired from the at least one sensor, or an application for transmitting the data acquired from the at least one sensor to at least one other electronic device). Kim fails to explicitly teach a guide for indicating the information and the time interval, wherein the screen is associated with editing of the video. However, Pacurariu explicitly teaches a guide for indicating the information and the time interval, wherein the screen is associated with editing of the video (Fig. 1, Paragraph [0035]: Pacurariu discloses that the user interface 145 may be communicatively coupled (either wirelessly or wired) and may include any type of input/output device including buttons and/or a touchscreen. The user interface 145 may be communicatively coupled with the controller 120 and/or the memory 125 via a wired or wireless interface).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of a method performed by an electronic device comprising identifying an angle between a first housing and a second housing in a state of obtaining a video by controlling at least one camera provided on the first housing of the electronic device, with the teachings of Pacurariu of a guide for indicating the information and the time interval, wherein the screen is associated with editing of the video. Kim's system of imaging on a device with multiple housings would thereby display a guide for indicating the information and the time interval, wherein the screen is associated with editing of the video. The motivation for the modification would have been to allow more access to system information, since Kim and Pacurariu both describe systems that use sensors to gather data: Kim's system provides a way to accurately acquire environmental data, while Pacurariu's system provides a way to accurately tag motion events. Please see Kim et al. (US 20220214852 A1), Paragraph [0116], and Pacurariu et al. (US 20150187390 A1), Paragraphs [0049]-[0050].

Claims 6-9 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20220214852 A1), hereafter referenced as Kim, in view of Pacurariu et al. (US 20150187390 A1), hereafter referenced as Pacurariu, Ishikawa et al. (US 20210096361 A1), hereafter referenced as Ishikawa, Waggoner et al. (US 20170177197 A1), hereafter referenced as Waggoner, and Zhang et al. (US 20170352132 A1), hereafter referenced as Zhang.
Regarding claim 6, Kim in view of Pacurariu, Ishikawa, and Waggoner teaches the electronic device of claim 5. Kim fails to explicitly teach wherein the processor is further configured to obtain the time interval comprising the timing identified as exceeding a designated threshold. However, Pacurariu explicitly teaches wherein the processor is further configured to obtain the time interval comprising the timing identified as exceeding a designated threshold (Fig. 2, Paragraph [0049]: Pacurariu discloses that some motion data may be derived, for example, from data sampled from the motion sensor 135 or the GPS sensor 130 and/or from data in the motion track 220 and/or the geolocation track 225. Certain accelerations or changes in acceleration that occur in a video frame or a series of video frames (e.g., changes in motion data above a specified threshold) may result in the video frame, a plurality of video frames, or a certain time being tagged to indicate the occurrence of certain events of the camera such as, for example, rotations, drops, stops, starts, beginning action, bumps, jerks, etc.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kim in view of Pacurariu, Ishikawa, and Waggoner, of an electronic device comprising: a first housing; a second housing; a first sensor; a second sensor; at least one camera provided on the first housing; and a processor configured to identify an angle between the first housing and the second housing in a state of obtaining a video by controlling the at least one camera, with the teachings of Pacurariu wherein the processor is further configured to obtain the time interval comprising the timing identified as exceeding a designated threshold.
Kim's system of imaging on a device with multiple housings would thereby obtain the time interval comprising the timing identified as exceeding a designated threshold. The motivation for the modification would have been to allow more access to system information,

Prosecution Timeline

Jun 16, 2023
Application Filed
Oct 08, 2025
Non-Final Rejection — §101, §103
Nov 30, 2025
Interview Requested
Dec 16, 2025
Examiner Interview Summary
Dec 16, 2025
Applicant Interview (Telephonic)
Jan 08, 2026
Response Filed
Feb 20, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597105
SEMANTIC-AWARE AUTO WHITE BALANCE
2y 5m to grant Granted Apr 07, 2026
Patent 12579755
OVERLAYING AUGMENTED REALITY (AR) CONTENT WITHIN AN AR HEADSET COUPLED TO A MAGNIFYING LOUPE
2y 5m to grant Granted Mar 17, 2026
Patent 12541972
Computing Device and Method for Handling an Object in Recorded Images
2y 5m to grant Granted Feb 03, 2026
Patent 12536247
Roughness Compensation Method and System, Image Processing Device, and Readable Storage Medium
2y 5m to grant Granted Jan 27, 2026
Patent 12529684
INSPECTION DEVICE, INSPECTION METHOD, AND INSPECTION PROGRAM
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+39.3%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
